
Acquisition of Dynamic Control Knowledge for a Robotic Manipulator

Andrew Moore
Conference Paper, Proceedings of the International Conference on Machine Learning (ICML), pp. 244-252, June 1990

Abstract

To make efficient use of a dynamic system such as a mechanical manipulator, the robotic controller needs various models of its behaviour. I describe a method of learning in which all the experiences in the lifetime of the robot are explicitly remembered. They are stored in a manner which permits fast recall of the closest previous experience to any new situation. This leads to a very high rate of learning of the robot kinematics and dynamics, which conventionally need to be derived analytically. The representation is a modified binary multidimensional tree, called a SAB-tree, which stores state-action-behaviour triples. This permits fast prediction of the effects of proposed actions and, given a goal behaviour, fast generation of a candidate action. I also discuss how the system is made resistant to noisy inputs and adapts to environmental changes. I explain how appropriate actions can be selected in the cases where (i) there has been earlier success and (ii) experimentation is required. This can be used to transform dynamic control into a greatly simplified problem. I conclude with some simulated experiments which exhibit high rates of learning. The final experiment also illustrates how a compound learning task can be structured into a hierarchy of simple learning tasks.
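
The abstract describes nearest-neighbour recall over stored state-action-behaviour triples, used both to predict the outcome of a proposed action and to suggest an action for a goal behaviour. The Python sketch below illustrates that idea only; it is not the paper's implementation. SciPy's KDTree stands in for the modified multidimensional binary tree, the trees are simply rebuilt after each new experience, and the names (SABMemory, remember, predict, propose) are invented for illustration.

# Minimal sketch of a SAB-style experience memory (illustrative, not the paper's code).
import numpy as np
from scipy.spatial import KDTree

class SABMemory:
    def __init__(self):
        self.states, self.actions, self.behaviours = [], [], []
        self._sa_tree = None   # indexed by (state, action): used for prediction
        self._sb_tree = None   # indexed by (state, behaviour): used for action choice

    def remember(self, state, action, behaviour):
        """Store one experience and refresh the lookup trees (rebuilt for simplicity)."""
        self.states.append(np.asarray(state, float))
        self.actions.append(np.asarray(action, float))
        self.behaviours.append(np.asarray(behaviour, float))
        self._sa_tree = KDTree(np.hstack([self.states, self.actions]))
        self._sb_tree = KDTree(np.hstack([self.states, self.behaviours]))

    def predict(self, state, action):
        """Predicted behaviour: outcome of the closest remembered (state, action)."""
        _, i = self._sa_tree.query(np.hstack([state, action]))
        return self.behaviours[i]

    def propose(self, state, goal_behaviour):
        """Candidate action: the one whose remembered (state, behaviour) was closest to the goal."""
        _, i = self._sb_tree.query(np.hstack([state, goal_behaviour]))
        return self.actions[i]

if __name__ == "__main__":
    mem = SABMemory()
    rng = np.random.default_rng(0)
    # Toy 1-D "dynamics": behaviour = state + action, observed from random trials.
    for _ in range(200):
        s, a = rng.uniform(-1, 1, 2)
        mem.remember([s], [a], [s + a])
    print(mem.predict([0.2], [0.3]))    # close to [0.5]
    print(mem.propose([0.2], [0.0]))    # action close to [-0.2]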

BibTeX

@conference{Moore-1990-15774,
author = {Andrew Moore},
title = {Acquisition of Dynamic Control Knowledge for a Robotic Manipulator},
booktitle = {Proceedings of (ICML) International Conference on Machine Learning},
year = {1990},
month = {June},
pages = {244--252},
publisher = {Morgan Kaufmann},
}