Event Detection in Crowded Videos

Yan Ke, Rahul Sukthankar and Martial Hebert
Conference Paper, IEEE International Conference on Computer Vision, October, 2007


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.


Real-world actions often occur in crowded, dynamic environments. This poses a difficult challenge for current approaches to video event detection because distracting motion from other objects in the scene makes it hard to segment the actor from the background. We propose a technique for event recognition in crowded videos that reliably identifies actions in the presence of partial occlusion and background clutter. Our approach is based on three key ideas: (1) we efficiently match the volumetric representation of an event against oversegmented spatio-temporal video volumes; (2) we augment our shape-based features using flow; (3) rather than treating an event template as an atomic entity, we match it separately by parts (both in space and time), enabling robustness against occlusions and actor variability. Our experiments on human actions, such as picking up a dropped object or waving in a crowd, show reliable detection with few false positives.
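To illustrate the third idea at a high level, the sketch below shows one simple way a part-based detector can tolerate occlusion: each template part is scored independently, and the event score is aggregated only over well-matched parts, with a minimum fraction required. This is an illustrative simplification, not the authors' actual matching algorithm; the threshold values and function names are hypothetical.

```python
# Illustrative sketch of part-based aggregation (NOT the paper's
# implementation): score each spatio-temporal template part
# independently, then tolerate a fraction of occluded/mismatched parts.

def match_parts(part_scores, min_fraction=0.7):
    """Aggregate per-part similarity scores (each in [0, 1]).

    Parts scoring below a hypothetical per-part threshold are treated
    as occluded; the event fires only if enough parts match well.
    """
    threshold = 0.5  # hypothetical per-part match threshold
    matched = [s for s in part_scores if s >= threshold]
    if len(matched) < min_fraction * len(part_scores):
        return 0.0  # too many parts occluded or mismatched
    return sum(matched) / len(matched)

# Example: a 5-part template with one part occluded still detects,
# because 4 of 5 parts match well.
scores = [0.9, 0.8, 0.85, 0.2, 0.95]
print(match_parts(scores))  # prints 0.875
```

An atomic (whole-template) matcher would let the single occluded part drag down the global score; scoring parts separately localizes the damage to the occluded region.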

BibTeX Reference
@inproceedings{ke2007event,
  author = {Yan Ke and Rahul Sukthankar and Martial Hebert},
  title = {Event Detection in Crowded Videos},
  booktitle = {IEEE International Conference on Computer Vision},
  year = {2007},
  month = {October},
  keywords = {Event, Detection, Video, Action, Recognition, Computer, Vision, Surveillance},
}