Learning discrete Bayesian models for autonomous agent navigation - Robotics Institute, Carnegie Mellon University


Conference Paper, Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '99), pp. 137-143, November 1999

Abstract

Partially observable Markov decision processes (POMDPs) are a convenient representation for reasoning and planning in mobile robot applications. We investigate two algorithms for learning POMDPs from series of observation/action pairs, comparing their performance in fourteen synthetic worlds in conjunction with four planning algorithms. Experimental results suggest that the traditional Baum-Welch algorithm better learns the structure of worlds specifically designed to impede the agent, while a best-first model merging algorithm originally due to Stolcke and Omohundro (1993) performs better in more benign worlds, including a model of a typical real-world robot fetching task.
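The Baum-Welch algorithm mentioned in the abstract re-estimates a model's parameters from an observation sequence via expectation-maximization. As a rough illustration only (not the paper's implementation), the sketch below applies one Baum-Welch update to a tiny two-state discrete hidden Markov model; it ignores the action component of a POMDP for brevity, and all numeric values in the usage example are hypothetical toy data.

```python
# Minimal Baum-Welch (EM) re-estimation for a discrete HMM.
# pi: initial state distribution, A: transition matrix, B: emission matrix.
# This is an illustrative sketch, not the authors' code.

def forward(obs, pi, A, B):
    """Forward pass: alpha[t][i] = P(o_1..o_t, state_t = i)."""
    n = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append([sum(prev[j] * A[j][i] for j in range(n)) * B[i][o]
                      for i in range(n)])
    return alpha

def backward(obs, A, B):
    """Backward pass: beta[t][i] = P(o_{t+1}..o_T | state_t = i)."""
    n = len(A)
    beta = [[1.0] * n]
    for o in reversed(obs[1:]):
        nxt = beta[0]
        beta.insert(0, [sum(A[i][j] * B[j][o] * nxt[j] for j in range(n))
                        for i in range(n)])
    return beta

def baum_welch_step(obs, pi, A, B):
    """One EM update of (pi, A, B); also returns the sequence likelihood
    under the *input* parameters."""
    n, T = len(pi), len(obs)
    alpha, beta = forward(obs, pi, A, B), backward(obs, A, B)
    ll = sum(alpha[-1])  # P(o_1..o_T) under the current model
    # gamma[t][i]: posterior probability of state i at time t
    gamma = [[alpha[t][i] * beta[t][i] / ll for i in range(n)]
             for t in range(T)]
    # xi[t][i][j]: posterior probability of transition i -> j at time t
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / ll
            for j in range(n)] for i in range(n)] for t in range(T - 1)]
    new_pi = gamma[0][:]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1))
              for j in range(n)] for i in range(n)]
    num_sym = len(B[0])
    new_B = [[sum(gamma[t][i] for t in range(T) if obs[t] == k) /
              sum(gamma[t][i] for t in range(T))
              for k in range(num_sym)] for i in range(n)]
    return new_pi, new_A, new_B, ll

# Toy usage: two states, two observation symbols, made-up parameters.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
obs = [0, 0, 1, 0, 1, 1, 0, 0]
pi, A, B, ll = baum_welch_step(obs, pi, A, B)
```

Iterating `baum_welch_step` is guaranteed by EM theory to never decrease the sequence likelihood, which is how such learners fit a world model to an agent's observation history.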

BibTeX

@conference{Nikovski-1999-15057,
author = {Daniel Nikovski and Illah Nourbakhsh},
title = {Learning discrete Bayesian models for autonomous agent navigation},
booktitle = {Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '99)},
year = {1999},
month = {November},
pages = {137--143},
}