Subjective Approximate Solutions for Decentralized POMDPs

Conference Paper, Proceedings of the 6th International Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS '07), pp. 862-864, May 2007

Abstract

Planning for cooperative teams under uncertainty is a crucial problem in multiagent systems. Decentralized partially observable Markov decision processes (DEC-POMDPs) provide a convenient but intractable model for specifying planning problems in cooperative teams. Compared to the single-agent case, an additional challenge is posed by the lack of free communication between the teammates. We argue that acting close to optimally in a team involves a tradeoff between opportunistically taking advantage of an agent's local observations and remaining predictable for the teammates. We present a more opportunistic version of an existing approximate algorithm for DEC-POMDPs and investigate this tradeoff. Preliminary evaluation shows that in certain settings the opportunistic modification provides significantly better performance.
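For readers unfamiliar with the model, a DEC-POMDP is typically defined as a tuple ⟨S, {A_i}, T, R, {Ω_i}, O⟩ with a shared reward and per-agent observations. The sketch below is purely illustrative and not from the paper: it encodes a toy two-agent problem (in the spirit of the standard Dec-Tiger benchmark) as plain Python data, with hypothetical names and dynamics, to show why the joint-action space drives the intractability the abstract mentions.

```python
from itertools import product

# Illustrative only -- hypothetical toy DEC-POMDP, not the paper's code.
# A DEC-POMDP is a tuple <S, {A_i}, T, R, {Omega_i}, O>.
states = ["left", "right"]                   # S: hidden world states
actions = ["listen", "open"]                 # A_i: same action set per agent
observations = ["hear-left", "hear-right"]   # Omega_i: local, noisy signals

# Joint actions are the cross product of per-agent actions; this
# exponential growth in the team size is a source of intractability.
joint_actions = list(product(actions, repeat=2))

def transition(s, ja):
    """T(s' | s, a1, a2): toy dynamics -- any 'open' resets the state
    uniformly at random; joint listening leaves it unchanged."""
    if "open" in ja:
        return {s2: 0.5 for s2 in states}
    return {s: 1.0}

def reward(s, ja):
    """R(s, a1, a2): a single team reward shared by both agents,
    which is what makes the problem cooperative."""
    if ja == ("open", "open"):
        return 10.0 if s == "right" else -100.0
    return -1.0  # small cost for gathering information

print(len(joint_actions))                      # -> 4
print(reward("left", ("listen", "listen")))    # -> -1.0
```

Because each agent sees only its own observations in Ω_i and cannot communicate for free, each must weigh exploiting its local information against staying predictable to its teammate, which is the tradeoff the paper studies.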

BibTeX

@conference{Chechetka-2007-9745,
author = {Anton Chechetka and Katia Sycara},
title = {Subjective Approximate Solutions for Decentralized POMDPs},
booktitle = {Proceedings of 6th International Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS '07)},
year = {2007},
month = {May},
pages = {862--864},
}