Vision Augmented Robot Feeding

Alexandre Candeias, Travers Rhodes, Manuel Marques, Joao P Costeira and Manuela Veloso
Conference Paper, ECCV 2018: Computer Vision – ECCV 2018 Workshops, pp. 50-65, January, 2019


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Researchers have developed robotic feeding assistants to help at meals so that people with disabilities can live more autonomous lives. Current commercial feeding assistant robots acquire food without feedback on acquisition success and move to a preprogrammed location to deliver the food. In this work, we evaluate how vision can be used to improve both food acquisition and delivery. We show that using visual feedback on whether food was captured increases food acquisition efficiency. We also show how Discriminative Optimization (DO) can be used for tracking, so that the food can be brought all the way to the user's mouth rather than to a preprogrammed feeding location.
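The acquisition feedback loop described above can be sketched in a few lines: attempt a scoop, run a visual check of whether food actually landed on the utensil, and retry on failure instead of delivering an empty utensil. The sketch below is illustrative only, not the paper's implementation; `SimulatedRobot` and `food_on_utensil` are hypothetical stand-ins for the real manipulator and vision classifier.

```python
import random

def food_on_utensil(image) -> bool:
    # Stand-in for a visual success classifier; in this toy setting the
    # "image" already encodes whether food is present on the utensil.
    return bool(image)

class SimulatedRobot:
    """Toy robot whose scoop attempt succeeds with a fixed probability."""
    def __init__(self, success_prob: float, rng: random.Random):
        self.success_prob = success_prob
        self.rng = rng
        self.attempts = 0

    def scoop(self):
        # Attempt acquisition and return the post-scoop "camera image".
        self.attempts += 1
        return self.rng.random() < self.success_prob

def acquire_with_feedback(robot: SimulatedRobot, max_attempts: int = 5) -> bool:
    """Retry scooping until the visual check confirms food was captured."""
    for _ in range(max_attempts):
        image = robot.scoop()
        if food_on_utensil(image):
            return True   # food confirmed on the utensil; safe to deliver
    return False          # report failure after exhausting all attempts

robot = SimulatedRobot(success_prob=0.6, rng=random.Random(0))
acquire_with_feedback(robot)
```

Without the visual check, a failed scoop still costs a full delivery motion; with it, the robot only moves toward the user once food is confirmed, which is the efficiency gain the abstract refers to.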

A. Candeias and T. Rhodes contributed equally.

@inproceedings{candeias2019vision,
author = {Alexandre Candeias and Travers Rhodes and Manuel Marques and Joao P Costeira and Manuela Veloso},
title = {Vision Augmented Robot Feeding},
booktitle = {ECCV 2018: Computer Vision – ECCV 2018 Workshops},
year = {2019},
month = {January},
editor = {Laura Leal-Taixe and Stefan Roth},
pages = {50-65},
publisher = {Springer},
keywords = {Assistive technologies, Manipulation aids, Computer vision, Feeding assistance},
}