My research interests lie in the area of AI in robotics, including autonomy, language understanding, multimodal perception, path planning, and machine learning. I am interested in creating persistent robots that can co-exist with humans in shared environments, improving themselves over time through continuous training, exploration, and interaction. Toward this general goal, I am currently focusing on the translation of information among vision, language, and planning so that robots can 1) perceive better by fusing sensory data with verbal information provided by humans, 2) describe what they have observed in natural language, 3) generate semantic navigation plans by following complex directions, 4) learn to navigate in a socially compliant way, and 5) explain their past, current, and future actions and plans. Among communication modalities for robots, I am primarily interested in verbal (text or speech) communication in natural language, but I am also interested in alternative ways for robots to express their internal states, e.g., modalities studied in the arts and design community such as media arts.
Currently, as a co-PI, I lead the perception and learning tasks for the DARPA Aircrew-Labor In-Cockpit Automation System (ALIAS) program, which aims to bring intelligence to cockpits through semantic perception and learning new skills by observing experienced pilots. I am also a PI on the intelligence architecture subtask of the ARL RCTA program, where my team's work on language understanding for robot navigation won the Best Cognitive Robotics Paper Award at the IEEE International Conference on Robotics and Automation (ICRA) in 2015. My newest project is the US DoD – Korea MOTIE collaboration on robotics technologies for disaster response, where, as a co-PI, I will lead semantic map construction by leveraging text data from social media and crowdsourcing.