Experiments with Learning for Bipedal Locomotion and Fixed-wing Aerial Acrobatics
Mauldin Auditorium (NSH 1305)
Time: 3:30 to 4:30 p.m.
Russ Tedrake is an Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory. He received his B.S.E. in Computer Engineering from the University of Michigan, Ann Arbor, in 1999, and his Ph.D. in Electrical Engineering and Computer Science from MIT in 2004, working with Sebastian Seung in neuroscience. After graduating, he joined the MIT Brain and Cognitive Sciences Department as a Postdoctoral Associate. During his education, he also spent time at Microsoft, Microsoft Research, and the Santa Fe Institute.
Machine learning techniques that approximate solutions to optimal control problems in robotics promise to produce nonlinear control policies that exploit (instead of cancel out) the nonlinear dynamics of our machines. In this talk, I will develop this idea in the context of designing minimal feedback policies for bipedal robots based on passive-dynamic walkers, and of designing feedback policies for aircraft operating in post-stall flight conditions in order to land quickly on a perch. In both cases, the nonlinear, underactuated dynamics of the machine can be exploited to obtain superior performance (executing less conservative trajectories with less energy). I'll discuss our learning solutions to these problems based on value iteration, pure policy-gradient, and actor-critic reinforcement learning, and argue that incorporating domain-specific knowledge of the dynamics is, and will continue to be, an essential ingredient for success.
For appointments, please contact Jean Harpley (email@example.com).