My research interests center on using machine learning techniques to build low-level robot motion control algorithms, also called policies. The execution of motion tasks is central to the success of many robotics applications. However, developing policies that enable skillful execution in real-world environments is often challenging. One solution is to have the robot learn its policy. I focus on the particular approach of Learning from Demonstration. Within this learning paradigm, a teacher provides example executions of a task, from which the learner generalizes a control policy.
My thesis explores ways in which a human teacher might effectively train and advise a robot learner. The robot learning algorithms I have developed build policies through a combination of human demonstration and advice. The general framework is that demonstration provides the learner with an initial policy. Human advice is then offered in response to policy executions by the robot learner, and the learner incorporates this advice to improve its policy. In earlier work, advice took the form of a binary critique of robot performance. More recent work has developed a richer advice representation, through which the human teacher may provide corrections to the robot's execution. These algorithms are implemented and validated on Segway RMP robots performing planar motion tasks.
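The demonstration-then-advice framework described above can be sketched in code. The following is a minimal illustrative sketch, not the thesis implementation: it assumes a toy 1-D state space, a nearest-neighbor policy derived from demonstrated state-action pairs, and binary critique modeled as discarding demonstration points near states the teacher flagged as poor. All function names and parameters here are hypothetical.

```python
def nearest_neighbor_policy(dataset):
    """Derive a policy from demonstrations: map a query state to the
    action of the closest demonstrated state (1-NN over the dataset).
    (Illustrative stand-in for policy derivation from demonstration.)"""
    def policy(state):
        nearest_state, nearest_action = min(
            dataset, key=lambda pair: abs(pair[0] - state)
        )
        return nearest_action
    return policy

def apply_binary_critique(dataset, flagged_states, radius=0.5):
    """Model binary critique: drop demonstration points within `radius`
    of any execution state the teacher flagged as poor, so the
    re-derived policy no longer reproduces the critiqued behavior."""
    return [
        (state, action)
        for (state, action) in dataset
        if all(abs(state - flagged) > radius for flagged in flagged_states)
    ]

# Toy 1-D demonstrations: (state, action) pairs from the teacher.
demos = [(0.0, "stop"), (1.0, "forward"), (2.0, "forward"), (3.0, "turn")]
policy = nearest_neighbor_policy(demos)  # initial policy from demonstration

# After watching an execution, the teacher flags behavior near state 3.0.
demos = apply_binary_critique(demos, flagged_states=[3.0])
policy = nearest_neighbor_policy(demos)  # improved policy after critique
```

After the critique, queries near the flagged region fall back to the nearest remaining demonstration (here, `policy(2.9)` now returns `"forward"` rather than `"turn"`). The richer corrective advice mentioned above would instead replace the flagged points with teacher-supplied corrections rather than merely removing them.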