Active Markov Information-Theoretic Path Planning for Robotic Environmental Sensing

Kian Hsiang Low, John M. Dolan, and Pradeep Khosla
Proceedings of the 10th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS-11), May, 2011, pp. 753-760.


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract
Recent research in multi-robot exploration and mapping has focused on sampling environmental fields, which are typically modeled using the Gaussian process (GP). Existing information-theoretic exploration strategies for learning GP-based environmental field maps adopt the non-Markovian problem structure and consequently scale poorly with the length of history of observations. Hence, it becomes computationally impractical to use these strategies for in situ, real-time active sampling. To ease this computational burden, this paper presents a Markov-based approach to efficient information-theoretic path planning for active sampling of GP-based fields. We analyze the time complexity of solving the Markov-based path planning problem, and demonstrate analytically that it scales better than that of deriving the non-Markovian strategies with increasing length of planning horizon. For a class of exploration tasks called the transect sampling task, we provide theoretical guarantees on the active sampling performance of our Markov-based policy, from which ideal environmental field conditions and sampling task settings can be established to limit its performance degradation due to violation of the Markov assumption. Empirical evaluation on real-world temperature and plankton density field data shows that our Markov-based policy can generally achieve active sampling performance comparable to that of the widely-used non-Markovian greedy policies under less favorable realistic field conditions and task settings while enjoying significant computational gain over them.
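To give a concrete sense of the non-Markovian greedy baseline the abstract refers to: at each step, the robot conditions a GP on its entire history of observed locations and moves to the candidate location with the highest posterior entropy (equivalently, for a GP, the highest posterior variance). The sketch below is a generic illustration of this idea, not the authors' Markov-based algorithm; the kernel, hyperparameters, and function names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    # Squared-exponential covariance between row vectors of A and B
    # (illustrative choice; the paper's GP kernel may differ).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def posterior_variance(X_obs, X_cand, lengthscale=1.0,
                       signal_var=1.0, noise_var=0.1):
    # GP posterior variance at candidate locations, conditioned on the
    # full observation history X_obs -- the conditioning step whose cost
    # grows with the length of that history.
    K = rbf_kernel(X_obs, X_obs, lengthscale, signal_var) \
        + noise_var * np.eye(len(X_obs))
    k = rbf_kernel(X_obs, X_cand, lengthscale, signal_var)
    v = np.linalg.solve(K, k)
    return signal_var - (k * v).sum(axis=0)

def greedy_entropy_step(X_obs, X_cand, **kw):
    # Greedy rule: go where the GP is most uncertain. For a Gaussian,
    # entropy is monotone in variance, so argmax variance suffices.
    return int(np.argmax(posterior_variance(X_obs, X_cand, **kw)))

# Having observed the field at x = 0, the robot prefers the faraway,
# still-uncertain candidate over the one next to its past observation.
X_obs = np.array([[0.0]])
X_cand = np.array([[0.1], [5.0]])
choice = greedy_entropy_step(X_obs, X_cand)  # -> 1 (the distant point)
```

The Markov-based planner proposed in the paper avoids conditioning on the full history, which is where its computational gain over this kind of greedy policy comes from.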

Keywords
multirobot exploration, multirobot mapping, adaptive sampling, active learning, Gaussian process, non-myopic path planning

Notes
Associated Lab(s) / Group(s): Tele-Supervised Autonomous Robotics
Associated Project(s): Telesupervised Adaptive Ocean Sensor Fleet and Robot Sensor Boat
Number of pages: 8

Text Reference
Kian Hsiang Low, John M. Dolan, and Pradeep Khosla, "Active Markov Information-Theoretic Path Planning for Robotic Environmental Sensing," Proceedings of the 10th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS-11), May, 2011, pp. 753-760.

BibTeX Reference
@inproceedings{Low_2011_6895,
   author = "Kian Hsiang Low and John M. Dolan and Pradeep Khosla",
   title = "Active Markov Information-Theoretic Path Planning for Robotic Environmental Sensing",
   booktitle = "Proceedings of the 10th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS-11)",
   pages = "753-760",
   month = "May",
   year = "2011",
}