Programming by human demonstration is a new paradigm for the development of robotic applications that focuses on the needs of task experts rather than programming experts. The traditional text-based programming paradigm demands that the user be an expert in a particular programming language, and further demands that the user translate the task into this foreign language. This level of programming expertise generally precludes the user from having detailed task expertise, because his or her time is devoted to the practice of programming, not the practice of the task. The goal of programming by demonstration is to eliminate both the programming language expertise and, more importantly, the expertise required to translate the task into the language.
Gesture-Based Programming is a new form of programming by human demonstration that views the demonstration as a series of inexact "gestures" that convey the "intention" of the task strategy, not the details of the strategy itself. This is analogous to the type of "programming" that occurs between human teacher and student and is more intuitive for both. However, it requires a "shared ontology" between teacher and student, in the form of a common skill database, to abstract the observed gestures to meaningful intentions that can be mapped onto previous experiences and previously acquired skills.
This thesis investigates several key components required for a Gesture-Based Programming environment that revolve around a common, though seemingly unrelated, theme: sensor calibration. A novel approach to multi-axis sensor calibration based on shape and motion decomposition was developed as a companion to the development of some novel, fluid-based, wearable fingertip sensors for observing contact gestures during demonstration. "Shape from Motion Calibration" does not require explicit references for each measurement. For force sensors, unknown, randomly applied loads result in an accurate calibration matrix: the intrinsic "shape" of the input/output mapping is extracted from the random "motion" of the applied load through the sensing space. This ability to extract intrinsic structure led to a convenient eigenspace learning mechanism that provides three necessary pieces of the task interpretation and abstraction process: sensorimotor primitive acquisition (populating the skill database), primitive identification (relating gestures to skills in the database), and primitive transformation ("skill morphing"). This thesis demonstrates the technique for learning, identifying, and morphing simple manipulative primitives on a PUMA robot and interpreting the gestures of a human demonstrator in order to program a robot to perform the same task.
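The core idea of extracting a sensor's intrinsic "shape" from the "motion" of unmeasured random loads can be sketched numerically. The toy example below (a hypothetical 2-axis force sensor with 4 strain gauges; all names and dimensions are assumptions, not taken from the thesis) applies random, unknown loads and recovers the calibration subspace from the singular value decomposition of the raw readings alone. In practice the subspace is determined only up to an unknown linear transform, which the actual method resolves with additional constraints; this sketch only verifies subspace recovery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-axis force sensor with 4 strain gauges.
# True (unknown) calibration maps forces -> gauge voltages.
C_true = rng.normal(size=(4, 2))

# Random, unmeasured loads applied during the demonstration,
# plus a little gauge noise.
F = rng.normal(size=(500, 2))                        # true forces (never observed)
V = F @ C_true.T + 1e-3 * rng.normal(size=(500, 4))  # raw gauge readings

# "Shape from motion": the top-2 right singular vectors of the raw
# readings span the sensor's intrinsic 2-D input/output subspace,
# independent of the particular random "motion" of the load.
U, s, Wt = np.linalg.svd(V, full_matrices=False)
shape = Wt[:2].T  # 4x2 basis for the calibration subspace,
                  # up to an unknown 2x2 linear transform

# Verify: the true calibration columns lie in the recovered subspace.
P = shape @ shape.T  # orthogonal projector onto the recovered subspace
err = np.linalg.norm(P @ C_true - C_true) / np.linalg.norm(C_true)
print(f"relative subspace error: {err:.4f}")
```

Because no reference loads are needed, the same decomposition can run continuously on in-service data, which is what makes it attractive for wearable sensors whose mounting (and hence calibration) shifts over time.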
Associated Center(s) / Consortia:
National Robotics Engineering Center
Associated Lab(s) / Group(s): Advanced Mechatronics Lab
Associated Project(s): Gesture Based Programming
Richard Voyles, "Toward Gesture-Based Programming: Agent-Based Haptic Skill Acquisition and Interpretation," doctoral dissertation, tech. report CMU-RI-TR-97-36, Robotics Institute, Carnegie Mellon University, August, 1997.
@phdthesis{voyles1997gesture,
  author  = "Richard Voyles",
  title   = "Toward Gesture-Based Programming: Agent-Based Haptic Skill Acquisition and Interpretation",
  school  = "Robotics Institute, Carnegie Mellon University",
  number  = "CMU-RI-TR-97-36",
  month   = "August",
  year    = "1997",
  address = "Pittsburgh, PA",
}
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.