I am interested in understanding physical interaction with the environment: how do we select and apply exactly the right forces to maneuver bulky and heavy objects, scramble over large rocks using both hands and feet, or use handheld tools?
In robotics, a better understanding of these interaction forces can help us create more dexterous robots that are able to operate in environments such as the home. In particular, we would like to create natural grasping and manipulation behavior using measured human examples as a resource. In initial experiments, we have demonstrated a humanoid robot tumbling a variety of large, heavy objects using a strategy derived directly from a human example. Some of the questions that remain to be answered are “what does it really mean for a robot to perform a task in the same way as a person?” and “how can we convert a collection of measured human examples into a robust control policy for a robot?”
In computer graphics, an understanding of interaction forces can help us to create more natural looking motion when a character climbs, performs athletic maneuvers, or manipulates objects. We have developed fast techniques for computing optimal, physically plausible motion. We are also exploring the importance of physical correctness in graphics applications. How physically incorrect can motion be before people start to notice? In other words, how much can we cheat?
One of my particular areas of interest in both robotics and graphics is the hand. Modeling convincing hand motion is very difficult; in fact, the hand itself has almost as many degrees of freedom, or directions of motion, as are typically used to model the entire rest of the body! However, observed hand motion often appears to be much less complex. By studying examples of human hand motion and human hand anatomy, we hope to characterize hand behavior in a way that can be exploited for easier control of animated hands and effective control of robot hands.