Vision and Improved Learned-Trajectory Replay for Assistive-Feeding and Food-Plating Robots

Travers Rhodes
Master's Thesis, Tech. Report CMU-RI-TR-19-55, August 2019


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

Food manipulation offers an interesting frontier for robotics research because of its direct application to real-world problems and the challenges involved in robustly manipulating deformable food items. In this work, we focus on the challenges robots face when manipulating food for assistive feeding and meal preparation: teaching robots visual perception of the objects to be manipulated, creating error-recovery and feedback systems, and improving on kinesthetic teaching of manipulation trajectories. The work includes several complete implementations of food-manipulation robots for feeding and food plating on several robot platforms: a SoftBank Robotics Pepper, a Kinova MICO, a Niryo One, and a UR5.


@mastersthesis{Rhodes-2019-117062,
author = {Travers Rhodes},
title = {Vision and Improved Learned-Trajectory Replay for Assistive-Feeding and Food-Plating Robots},
year = {2019},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-19-55},
keywords = {Assistive Robotics, Manipulation, Food},
}