This page is provided for historical and archival purposes only. While the seminar dates are correct, we offer no guarantee of informational accuracy or link validity. Contact information for the speakers, hosts, and seminar committee is certainly out of date.
Most autonomous indoor robots use landmark-based navigation schemes: the robot moves down corridors until it observes features (such as doors or corridor junctions) that indicate it should turn or stop. We implemented landmark-based navigation for Xavier and found it somewhat wanting: the robot would sometimes make mistakes, and when it got lost it had no reliable way to recover.
To remedy those problems, we have been developing a navigation scheme based on partially observable Markov decision process (POMDP) models. In our approach, a POMDP model is automatically compiled from a topological map of the environment. The Markov model is used to track the robot's position: action reports (from dead reckoning) and sensor reports (feature detectors) are used to update the probability distribution of Markov states. A path planner associates directives with each Markov state, and the robot repeatedly executes the directive with the highest total probability mass.
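The tracking-and-execution loop described above can be sketched in a few lines. This is an illustrative sketch only, not Xavier's actual implementation: the state names, transition probabilities, and feature likelihoods below are invented for the example, and the real system compiles these models from the topological map.

```python
# Hedged sketch of POMDP-style position tracking: a predict step using the
# action (dead-reckoning) model, a correct step using the sensor (feature-
# detector) report, then selection of the directive with the highest total
# probability mass. All model values here are hypothetical.

def update_belief(belief, transition, likelihood):
    """One Bayesian update of the distribution over Markov states.

    belief:     dict state -> P(state)
    transition: dict (state, next_state) -> P(next_state | state, action)
    likelihood: dict state -> P(observed features | state)
    """
    # Predict: propagate probability mass through the motion model.
    predicted = {s: 0.0 for s in belief}
    for s, p in belief.items():
        for s2 in belief:
            predicted[s2] += p * transition.get((s, s2), 0.0)
    # Correct: weight each state by how well it explains the sensor report.
    posterior = {s: p * likelihood.get(s, 0.0) for s, p in predicted.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

def choose_directive(belief, directive_of):
    """Sum probability mass per directive and return the heaviest one."""
    mass = {}
    for s, p in belief.items():
        d = directive_of[s]
        mass[d] = mass.get(d, 0.0) + p
    return max(mass, key=mass.get)
```

For example, with three corridor states and a "move forward" action whose transition model mostly shifts mass one state ahead, a door-detector report that fits the middle state concentrates the belief there, and the planner's per-state directives are then aggregated before one is executed. Note that the directive is chosen by total mass over all states sharing it, not by the single most likely state.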
This probabilistic navigation scheme has several advantages over landmark-based navigation schemes: it is more robust to observation errors, it incorporates metric information in a natural way, and it can easily utilize additional sensor information to improve its position estimation capabilities. It also has advantages for indoor navigation over other schemes that represent uncertainty (e.g., using Kalman filters) because it can represent more general probability distributions.
This talk will discuss the probabilistic navigation method, how we use it to model space, and our experiments with Xavier. In addition, I will discuss our ongoing activities in learning better Markov models from experience, in decision-theoretic planning that utilizes probabilistic information to develop robust, efficient plans, and in learning and incorporating new (vision-based) feature detectors.