Recovering the Basic Structure of Human Activities from Noisy Video-based Symbol Strings

Robotics Institute, Carnegie Mellon University

Kris M. Kitani, Yoichi Sato, and Akihiro Sugimoto
Journal Article, International Journal of Pattern Recognition and Artificial Intelligence, Vol. 22, No. 8, pp. 1621–1646, December 2008

Abstract

In recent years, stochastic context-free grammars have been shown to be effective for modeling human activities because of the hierarchical structures they represent. However, most research in this area has yet to address the issue of learning activity grammars from a noisy input source, namely video. In this paper, we present a framework for identifying noise and recovering the basic activity grammar from a noisy symbol string produced from video. We identify noise symbols by finding the set of non-noise symbols that optimally compresses the training data, where the optimality of compression is measured with a minimum description length (MDL) criterion. We demonstrate the robustness of our system to noise and its effectiveness in learning the basic structure of human activity through experiments with artificial data and a real video sequence from a local convenience store.
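The core idea in the abstract, selecting the subset of non-noise symbols whose retention best compresses the training string under an MDL criterion, can be sketched in Python. This is a minimal illustration under stated assumptions, not the paper's actual criterion: the two-part cost (a unigram Shannon code over kept symbols, a crude per-symbol model cost, and a flat positional penalty for symbols discarded as noise) and all function names are chosen here for exposition only.

```python
import math
from itertools import combinations
from collections import Counter

def description_length(string, kept):
    """Toy two-part MDL score (an illustrative assumption, not the paper's
    exact formulation): code length of the string restricted to `kept`
    under an empirical unigram model, plus a model cost per kept symbol,
    plus a flat penalty for each symbol discarded as noise."""
    filtered = [s for s in string if s in kept]
    counts = Counter(filtered)
    n = len(filtered)
    # Data cost: Shannon code length under empirical unigram probabilities.
    data_bits = -sum(c * math.log2(c / n) for c in counts.values())
    # Model cost: one count per kept symbol (roughly log n bits each).
    model_bits = len(kept) * math.log2(n + 1)
    # Noise cost: position plus identity for each discarded symbol.
    per_noise = math.log2(len(string) + 1) + math.log2(len(set(string)))
    noise_bits = (len(string) - n) * per_noise
    return data_bits + model_bits + noise_bits

def best_non_noise_set(string):
    """Exhaustively search all non-empty symbol subsets for the one that
    minimizes total description length (feasible only for small alphabets;
    the paper works with larger, video-derived symbol strings)."""
    alphabet = sorted(set(string))
    best = None
    for r in range(1, len(alphabet) + 1):
        for kept in combinations(alphabet, r):
            dl = description_length(string, set(kept))
            if best is None or dl < best[0]:
                best = (dl, set(kept))
    return best

if __name__ == "__main__":
    # 'x' appears rarely and irregularly, mimicking noise from video.
    s = list("abcabcabcabcx" + "abcabcabcabcx")
    dl, kept = best_non_noise_set(s)
    print(kept)  # the regular symbols {'a', 'b', 'c'} compress best
```

With the repeated "abc" pattern, dropping the two stray 'x' symbols shortens the total code despite the per-symbol noise penalty, so the search returns {'a', 'b', 'c'} as the non-noise set.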

BibTeX

@article{Kitani-2008-109773,
author = {Kris M. Kitani and Yoichi Sato and Akihiro Sugimoto},
title = {Recovering the Basic Structure of Human Activities from Noisy Video-based Symbol Strings},
journal = {International Journal of Pattern Recognition and Artificial Intelligence},
year = {2008},
month = {December},
volume = {22},
number = {8},
pages = {1621--1646},
}