My research lies at the intersection of robotics, machine learning, and computer vision.
I am interested in developing methods for robotic perception and control that allow robots to operate in the messy, cluttered environments of our daily lives. My approach is to design new deep learning algorithms that understand environmental change: how dynamic objects in the environment can move, and how a robot can affect its environment to achieve a desired task.
I have applied this idea of learning to understand environmental change to improve a robot's capabilities in two domains: object manipulation and autonomous driving. I am currently working on learning to control indoor robots for a variety of object manipulation tasks, addressing questions of multi-task learning, robust learning, simulation-to-real-world transfer, and safety. In autonomous driving, I have shown that modeling changes in object appearance improves every stage of the robot perception pipeline: segmentation, tracking, velocity estimation, and object recognition. By teaching robots to understand and affect environmental change, I hope to open the door to many new robotics applications, such as robots for homes, assisted living facilities, schools, hospitals, and disaster relief areas.
- Robot Programming by Demonstration
- Active Perception
- Robot Learning for Manipulation
- Perceptual Robotics
- Computer Vision
- Visual Servoing and Visual Tracking
- Reinforcement Learning
- Learning and Classification
- 3-D Vision and Recognition
- Human-Centered Robotics
- Sensing & Perception
- Manipulation & Interfaces
- Human-Robot Collaboration
- Deep Learning