I am interested in developing robotic systems that can physically interact with real-world environments, whether that is a mobile robot driving over off-road terrain, a sensing device measuring water quality in a stream, or a medical device being used on a human. For these applications to be realized, the underlying technologies must be affordable and reliable. Sensor fusion is a powerful way to satisfy both constraints: two cheap sensors can outperform one expensive sensor if they have different failure modes and provide complementary information. However, algorithms must be written that can monitor these failure modes and combine the complementary information effectively. More generally, better estimation algorithms must be developed that can handle uncertainty in complex, changing environments, and leveraging information from multiple sources can help with this.
To this end, I am interested in harnessing research from the computer vision and signal processing communities to extract salient features from varied data sources, and in machine learning and filtering techniques that can combine and make sense of these features. Machine learning can be used to train models that map these feature spaces into state spaces, leveraging one data source to supervise another in a self-supervised manner. Filtering can be used to combine estimates from different data sources, taking into account the reliability of each source.
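As a minimal sketch of the reliability-weighted fusion idea above, the snippet below uses inverse-variance weighting: each sensor's estimate is weighted by how trustworthy it is, so a noisier (higher-variance) sensor contributes less to the combined estimate. The function name and the numbers are purely illustrative, not part of any specific system described here.

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance weighted fusion of scalar estimates.

    Each sensor reports a measurement and a variance (its reliability);
    weights are the reciprocals of the variances, so unreliable sensors
    are down-weighted. Returns the fused estimate and its variance.
    """
    m = np.asarray(measurements, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused_estimate = np.sum(w * m) / np.sum(w)
    fused_variance = 1.0 / np.sum(w)   # fused variance is smaller than any input
    return fused_estimate, fused_variance

# Two cheap sensors observing the same quantity with equal reliability:
est, var = fuse([10.2, 9.8], [0.5, 0.5])
# est = 10.0 (midpoint, since weights are equal); var = 0.25 (< 0.5)
```

Note that the fused variance (0.25) is lower than either sensor's individual variance (0.5), which is the quantitative sense in which two cheap sensors can beat one expensive one, provided their errors are independent.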
To me, robotics is about building systems and about interfacing software with the physical world: combining sensors into larger sensing systems, and combining algorithms from different research fields as needed, to work within the constraints of our reality.