Probabilistic Planning for Robotic Exploration

Trey Smith
Doctoral dissertation, Tech. Report CMU-RI-TR-07-26, Robotics Institute, Carnegie Mellon University, July 2007


Download
  • PDF (6 MB)

Abstract
Robotic exploration tasks involve inherent uncertainty. They typically include navigating through unknown terrain, searching for features that may or may not be present, and intelligently reacting to data from noisy sensors (for example, a search-and-rescue robot, believing it has detected a trapped earthquake victim, might stop to check for signs of life). Exploration domains are distinguished both by the prevalence of uncertainty and by the importance of intelligent information gathering. An exploring robot must understand what unknown information is most relevant to its goals, how to gather that information, and how to incorporate the results into its future actions. This thesis has two main components. First, we present planning algorithms that generate robot control policies for partially observable Markov decision process (POMDP) planning problems. POMDP models explicitly represent the uncertain state of the world using a probability distribution over possible states, and they allow the planner to reason about information-gathering actions in a way that is decision-theoretically optimal. Relative to existing POMDP planning algorithms, our algorithms can more quickly generate approximately optimal policies, taking advantage of innovations in efficient value function representation, heuristic search, and state abstraction. This improved POMDP planning is important both to exploration domains and to a wider class of decision problems. Second, we demonstrate the relevance of onboard science data analysis and POMDP planning to robotic exploration. Our experiments centered on a robot deployed to map the distribution of life in the Atacama Desert of Chile, using operational techniques similar to a Mars mission. We found that science autonomy and POMDP planning techniques significantly improved science yield for exploration tasks conducted both in simulation and onboard the robot.
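The abstract's central idea, maintaining a probability distribution over possible world states and updating it as noisy observations arrive, is the standard POMDP belief update (a Bayes filter). The sketch below is not taken from the thesis; it is a minimal generic illustration using the abstract's search-and-rescue example, with hypothetical state names ("victim", "empty"), an assumed "check" action, and made-up sensor probabilities.

```python
# Hypothetical two-state POMDP belief update: a search-and-rescue robot is
# unsure whether a trapped victim is present. A noisy life-sign sensor
# updates the belief via Bayes' rule:
#   b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)

def belief_update(belief, action, obs, T, O):
    """Return the posterior belief after taking `action` and seeing `obs`."""
    new_belief = {}
    for s2 in belief:
        # Predict: probability of ending in state s2 under the transition model
        pred = sum(T[action][s][s2] * belief[s] for s in belief)
        # Correct: weight by the likelihood of the observation in s2
        new_belief[s2] = O[action][s2][obs] * pred
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# "check" is a pure information-gathering action: it leaves the state unchanged.
T = {"check": {"victim": {"victim": 1.0, "empty": 0.0},
               "empty":  {"victim": 0.0, "empty": 1.0}}}
# Assumed sensor model: reports "life" 90% of the time when a victim is
# present and 20% of the time (false positive) when the site is empty.
O = {"check": {"victim": {"life": 0.9, "none": 0.1},
               "empty":  {"life": 0.2, "none": 0.8}}}

b0 = {"victim": 0.5, "empty": 0.5}
b1 = belief_update(b0, "check", "life", T, O)
print(b1)  # belief in "victim" rises from 0.5 to about 0.82
```

A POMDP planner reasons over exactly these belief states, which is what lets it weigh the value of an information-gathering action like "check" against acting on the current, uncertain belief.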

Notes
Number of pages: 263

Text Reference
Trey Smith, "Probabilistic Planning for Robotic Exploration," doctoral dissertation, Tech. Report CMU-RI-TR-07-26, Robotics Institute, Carnegie Mellon University, July 2007.

BibTeX Reference
@phdthesis{Smith_2007_5970,
   author  = "Trey Smith",
   title   = "Probabilistic Planning for Robotic Exploration",
   school  = "Robotics Institute, Carnegie Mellon University",
   month   = "July",
   year    = "2007",
   number  = "CMU-RI-TR-07-26",
   address = "Pittsburgh, PA",
}