Robotics Institute, Carnegie Mellon University

Acquiring hand-action models by attention point analysis

Koichi Ogawara, Soshi Iba, Tomikazu Tanuki, Hiroshi Kimura, and Katsushi Ikeuchi
Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Vol. 1, pp. 465-470, May 2001

Abstract

This paper describes our current research on learning task-level representations by a robot through observation of human demonstrations. We focus on human hand actions and represent them as symbolic task models. We propose a framework for such models that efficiently integrates multiple observations based on attention points; we then evaluate the model using a human-form robot. We propose a two-step observation mechanism. In the first step, the system roughly observes the entire sequence of the human demonstration, builds a rough task model, and extracts attention points (APs). The attention points indicate the times and positions in the observation sequence that require further detailed analysis. In the second step, the system closely examines the sequence around the APs and obtains the attribute values for the task model, such as what to grasp, which hand to use, or the precise trajectory of the manipulated object. We implemented this system on a human-form robot and demonstrated its effectiveness.
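The two-step observation mechanism can be illustrated with a minimal sketch: a coarse pass over the whole demonstration flags attention points, then a fine pass examines a small window around each one. All names here (`Frame`, `extract_attention_points`, `analyze_around`) and the use of grasp-state changes as the AP trigger are illustrative assumptions, not details taken from the paper, which works from richer sensor input.

```python
from dataclasses import dataclass

# Hypothetical per-frame observation record; the paper's actual sensor
# data (vision and hand measurements) is not specified here.
@dataclass
class Frame:
    t: int          # time index in the demonstration sequence
    grasping: bool  # whether the hand is holding an object

def extract_attention_points(seq):
    """Step 1: coarse pass over the entire demonstration.

    Flags frames where the grasp state changes (grasp or release) as
    attention points (APs) that need detailed re-analysis.
    """
    return [f.t for prev, f in zip(seq, seq[1:]) if prev.grasping != f.grasping]

def analyze_around(seq, ap, window=2):
    """Step 2: fine pass over a small window around one AP.

    Here it merely returns the neighbouring frames; the real system
    would recover attributes such as the grasped object, the hand used,
    and the precise trajectory of the manipulated object.
    """
    return [f for f in seq if abs(f.t - ap) <= window]

# Toy demonstration: the hand grasps at t=3 and releases at t=7.
demo = [Frame(t, 3 <= t <= 6) for t in range(10)]
print(extract_attention_points(demo))  # [3, 7]
```

The point of the split is efficiency: the expensive, detailed analysis runs only on the short windows around each AP rather than on the full sequence.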

BibTeX

@conference{Ogawara-2001-8233,
author = {Koichi Ogawara and Soshi Iba and Tomikazu Tanuki and Hiroshi Kimura and Katsushi Ikeuchi},
title = {Acquiring hand-action models by attention point analysis},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {2001},
month = {May},
volume = {1},
pages = {465--470},
}