Tracking in Unstructured Crowded Scenes - Robotics Institute Carnegie Mellon University

Tracking in Unstructured Crowded Scenes

Mikel Rodriguez, Saad Ali, and Takeo Kanade
Conference Paper, Proceedings of (ICCV) International Conference on Computer Vision, pp. 1389-1396, September, 2009

Abstract

This paper presents a target tracking framework for unstructured crowded scenes. Unstructured crowded scenes are defined as those scenes where the motion of a crowd appears to be random, with different participants moving in different directions over time. This means each spatial location in such scenes supports more than one crowd behavior modality. The case of tracking in structured crowded scenes, where the crowd moves coherently in a common direction and the direction of motion does not vary over time, was previously handled in [1]. In this work, we propose to model the various crowd behavior (or motion) modalities at different locations of the scene by employing the Correlated Topic Model (CTM) of [16]. In our construction, words correspond to low-level quantized motion features and topics correspond to crowd behaviors. It is then assumed that motion at each location in an unstructured crowd scene is generated by a set of behavior proportions, where behaviors represent distributions over low-level motion features. In this way, any one location in the scene may support multiple crowd behavior modalities, and these can be used as prior information for tracking. Our approach enables us to model a diverse set of unstructured crowd domains, which range from cluttered time-lapse microscopy videos of cell populations in vitro to footage of crowded sporting events.
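The word/behavior construction in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the 8-bin directional quantization, the function names, and the assumption that behavior proportions are already available (CTM inference itself is not reproduced here) are all illustrative choices.

```python
import numpy as np

def quantize_motion(flow, n_bins=8):
    """Map 2D motion vectors to discrete 'words' by direction.

    flow: (N, 2) array of (dx, dy) motion vectors observed at one
    spatial location. Returns an (N,) array of word indices in
    [0, n_bins). (The paper uses low-level quantized motion features;
    direction-only binning is a simplification.)
    """
    angles = np.arctan2(flow[:, 1], flow[:, 0])          # in [-pi, pi]
    return ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

def motion_prior(behaviors, proportions):
    """Predictive distribution over motion words at a location.

    behaviors:   (K, V) array; each row is one crowd behavior, i.e. a
                 distribution over the V motion words (a 'topic').
    proportions: (K,) behavior mixing proportions for this location
                 (in the paper these come from fitting a CTM).
    Returns a length-V distribution that can serve as a motion prior
    when scoring candidate target displacements during tracking.
    """
    return proportions @ behaviors
```

Because a location's prior is a mixture over several behaviors, the same pixel can simultaneously favor, say, leftward and upward motion, which is exactly the multi-modality that distinguishes unstructured from structured crowded scenes.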

BibTeX

@conference{Rodriguez-2009-10329,
author = {Mikel Rodriguez and Saad Ali and Takeo Kanade},
title = {Tracking in Unstructured Crowded Scenes},
booktitle = {Proceedings of (ICCV) International Conference on Computer Vision},
year = {2009},
month = {September},
pages = {1389--1396},
}