Using Dialog and Human Observations to Dictate Tasks to a Learning Robot Assistant

Paul Rybski, Jeremy Stolarz, Kevin Yoon, and Manuela Veloso
Journal Article, Intelligent Service Robotics, Special Issue on Multidisciplinary Collaboration for Socially Assistive Robotics, Vol. 1, No. 2, pp. 159-167, April 2008

Abstract

Robot assistants need to interact with people in a natural way in order to be accepted into people’s day-to-day lives. We have been researching robot assistants with capabilities that include visually tracking humans in the environment, identifying the context in which humans carry out their activities, understanding spoken language (with a fixed vocabulary), participating in spoken dialogs to resolve ambiguities, and learning task procedures. In this paper, we describe a robot task learning algorithm in which the human explicitly and interactively instructs a series of steps to the robot through spoken language. The training algorithm fuses the robot’s perception of the human with the understood speech data, maps the spoken language to robotic actions, and follows the human to gather the action applicability state information. The robot represents the acquired task as a conditional procedure and engages the human in a spoken-language dialog to fill in information that the human may have omitted.
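The conditional-procedure representation mentioned in the abstract could be sketched as follows. This is purely an illustration, not the paper's actual data structures or algorithm: each dictated step pairs an action with an applicability condition, and steps whose conditions cannot be verified are candidates for the clarifying dialog.

```python
# Hypothetical sketch of a conditional task procedure. All names here
# (Step, TaskProcedure, the example task) are illustrative assumptions,
# not taken from the paper.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = Dict[str, bool]  # simplified world state: named boolean facts


@dataclass
class Step:
    action: str                          # e.g. a spoken command mapped to a robot action
    condition: Callable[[State], bool]   # applicability test gathered while following the human


@dataclass
class TaskProcedure:
    name: str
    steps: List[Step] = field(default_factory=list)

    def applicable(self, state: State) -> List[str]:
        """Actions whose applicability conditions hold in the given state."""
        return [s.action for s in self.steps if s.condition(state)]

    def missing(self, state: State) -> List[str]:
        """Actions whose conditions fail; the robot could ask about these in dialog."""
        return [s.action for s in self.steps if not s.condition(state)]


# Illustrative dictated task: navigate to an office, then deliver a message.
task = TaskProcedure("deliver_message", [
    Step("goto(office)", lambda s: s.get("localized", False)),
    Step("say(message)", lambda s: s.get("person_present", False)),
])

state = {"localized": True, "person_present": False}
print(task.applicable(state))  # only the navigation step currently applies
print(task.missing(state))     # the delivery step would trigger a clarifying query
```

The same structure supports the dialog loop described in the abstract: whatever `missing` returns is information the human may have omitted, and the robot can ask about it before committing the procedure.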

BibTeX

@article{Rybski-2008-9930,
author = {Paul Rybski and Jeremy Stolarz and Kevin Yoon and Manuela Veloso},
title = {Using Dialog and Human Observations to Dictate Tasks to a Learning Robot Assistant},
journal = {Intelligent Service Robotics, Special Issue on Multidisciplinary Collaboration for Socially Assistive Robotics},
year = {2008},
month = {April},
volume = {1},
number = {2},
pages = {159--167},
keywords = {human robot interaction},
}