Leveraging Multimodal Sensory Data for Robust Cutting

Master's Thesis, Tech. Report CMU-RI-TR-19-64, August 2019

Abstract

Cutting food is a challenging task due to the variety of material properties across food items. In addition, different events may occur while executing cutting actions, and these must be detected for proper skill execution and termination. Due to occlusions, it is often difficult to solve both of these problems with vision alone. However, by utilizing vibration feedback from contact microphones together with robot force data, we can classify the toughness of an object as well as monitor the status of the robot while slicing. We show that by leveraging this information, the robot is able to adapt its cutting technique to the material, which results in more robust cutting.
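
As a rough illustration of this multimodal idea, the sketch below classifies material toughness from contact-microphone vibration features combined with simple force statistics, then uses the prediction to drive the cutting behavior. The feature choices, class labels, and random-forest classifier are assumptions for illustration only, not the implementation used in this work.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(audio, force):
    """Summarize one slicing segment as a fixed-length feature vector."""
    spectrum = np.abs(np.fft.rfft(audio))                     # vibration spectrum
    band_energy = [np.log1p(b.sum()) for b in np.array_split(spectrum, 8)]
    force_stats = [force.mean(), force.std(), force.max()]    # cutting-force stats
    return np.array(band_energy + force_stats)

# Synthetic stand-ins for labeled slicing segments (0=soft, 1=medium, 2=tough);
# real training data would come from recorded cutting trials.
rng = np.random.default_rng(0)
segments = [(rng.normal(size=2048) * (1 + k), rng.normal(loc=k, size=200))
            for k in range(3) for _ in range(20)]
labels = [k for k in range(3) for _ in range(20)]

X = np.stack([extract_features(a, f) for a, f in segments])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Classify a new segment, then adapt the slicing motion to the predicted class.
audio, force = rng.normal(size=2048) * 3.0, rng.normal(loc=2.0, size=200)
print("predicted toughness class:", clf.predict([extract_features(audio, force)])[0])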

In this thesis, we cover our entire slicing robot pipeline from the ground up. We introduce the experimental setup; a new robot interface that allows us to control the robot and teach it new skills, such as the slicing skill; the vision system for picking up objects and placing them onto the cutting board; and finally the multimodal system for monitoring the robot and classifying the toughness of objects while slicing, which enables the robot to adapt its slicing motion to the material.
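
The sketch below gives one possible high-level skeleton of these pipeline stages. The stage boundaries follow the description above, but every function body, name, and parameter here is a hypothetical placeholder rather than the actual system code.

from dataclasses import dataclass

@dataclass
class SliceStatus:
    event: str        # e.g. "in_contact", "hit_board", "done"
    toughness: int    # predicted material class from the multimodal monitor

def pick_and_place(item: str) -> None:
    """Vision stage: locate the item and place it on the cutting board."""
    print(f"placed {item} on the cutting board")

def monitor_slice(step: int) -> SliceStatus:
    """Multimodal stage: fuse vibration and force feedback into a status."""
    return SliceStatus(event="done" if step >= 3 else "in_contact", toughness=1)

def slice_item(item: str) -> None:
    """Slicing skill: repeat the cut, adapting the motion until termination."""
    for step in range(10):
        status = monitor_slice(step)
        # A real system would retune motion parameters here (e.g. downward
        # force, sawing amplitude) based on status.toughness.
        print(f"{item} slice step {step}: toughness class {status.toughness}")
        if status.event == "done":
            break

for item in ("cucumber", "tomato"):
    pick_and_place(item)
    slice_item(item)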

BibTeX

@mastersthesis{Zhang-2019-117144,
author = {Kevin Zhang},
title = {Leveraging Multimodal Sensory Data for Robust Cutting},
year = {2019},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-19-64},
keywords = {Cutting, Slicing, Multimodal, Vibrations, Cooking},
}