
Policy Recognition for Multi-Player Tactical Scenarios

Conference Paper, Proceedings of the 6th International Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS '07), pp. 58-65, May 2007

Abstract

This paper addresses the problem of recognizing policies given logs of battle scenarios from multi-player games. The ability to identify individual and team policies from observations is important for a wide range of applications including automated commentary generation, game coaching, and opponent modeling. We define a policy as a preference model over possible actions based on the game state, and a team policy as a collection of individual policies along with an assignment of players to policies. This paper explores two promising approaches for policy recognition: (1) a model-based system for combining evidence from observed events using Dempster-Shafer theory, and (2) a data-driven discriminative classifier using support vector machines (SVMs). We evaluate our techniques on logs of real and simulated games played using the Open Gaming Foundation's d20 system, the rule set used by many popular tabletop games, including Dungeons and Dragons.
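To illustrate the first approach, the sketch below shows Dempster's rule of combination, the standard mechanism in Dempster-Shafer theory for fusing evidence from independent observations. The frame of discernment (three candidate policies), the observed events, and the mass values are illustrative assumptions for this example, not the models used in the paper.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements (subsets of the
    frame of discernment) to belief mass. Returns the combined mass
    function, normalized by the conflict factor K.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence; combination undefined")
    norm = 1.0 - conflict
    return {focal: mass / norm for focal, mass in combined.items()}

# Hypothetical frame of discernment: three candidate policies.
FRAME = frozenset({"aggressive", "defensive", "support"})

# Evidence from two observed events (e.g. a melee attack, then a
# healing spell); the mass assignments are illustrative only.
evidence_attack = {frozenset({"aggressive"}): 0.6, FRAME: 0.4}
evidence_heal = {frozenset({"support"}): 0.5,
                 frozenset({"defensive", "support"}): 0.3,
                 FRAME: 0.2}

posterior = combine(evidence_attack, evidence_heal)
for focal, mass in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```

Fusing further events is just repeated application of `combine`, since the rule is associative and commutative; the resulting belief over singleton policies can then be compared against the data-driven SVM classifier's predictions.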

BibTeX

@conference{Sukthankar-2007-9714,
author = {Gita Sukthankar and Katia Sycara},
title = {Policy Recognition for Multi-Player Tactical Scenarios},
booktitle = {Proceedings of the 6th International Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS '07)},
year = {2007},
month = {May},
pages = {58--65},
keywords = {plan recognition, multi-player games, Dempster-Shafer evidential reasoning, SVMs},
}