
Spatio-temporal Shape and Flow Correlation for Action Recognition

Yan Ke, Rahul Sukthankar and Martial Hebert
Conference Paper, Carnegie Mellon University, Visual Surveillance Workshop, June 2007

Download Publication (PDF)

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

This paper explores the use of volumetric features for action recognition. First, we propose a novel method to correlate spatio-temporal shapes with video clips that have been automatically segmented. Our method works on oversegmented videos, which means that we do not require background subtraction for reliable object segmentation. Next, we discuss and demonstrate the complementary nature of shape- and flow-based features for action recognition. Our method, when combined with a recent flow-based correlation technique, can detect a wide range of actions in video, as demonstrated by results on a long tennis video. Although not specifically designed for whole-video classification, we also show that our method's performance is competitive with current action classification techniques on a standard video classification dataset.
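The abstract's idea of treating shape and flow as complementary cues can be illustrated with a small sketch. The snippet below is a hypothetical illustration only, not the paper's actual algorithm: it computes a normalized cross-correlation for a spatio-temporal shape volume and a flow volume separately, then fuses the two scores with a weight `w` (a made-up parameter for this example).

```python
import numpy as np

def correlate(template, clip):
    """Normalized cross-correlation between two equal-shaped volumes.

    Returns a value in [-1, 1]; 1.0 means the volumes match exactly
    up to an additive offset and positive scale.
    """
    t = template - template.mean()
    c = clip - clip.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(c)
    return float(np.dot(t.ravel(), c.ravel()) / denom) if denom else 0.0

def combined_score(shape_tmpl, shape_clip, flow_tmpl, flow_clip, w=0.5):
    """Weighted fusion of the shape and flow correlation scores.

    `w` is a hypothetical mixing weight, not a parameter from the paper.
    """
    shape_score = correlate(shape_tmpl, shape_clip)
    flow_score = correlate(flow_tmpl, flow_clip)
    return w * shape_score + (1.0 - w) * flow_score
```

In this toy formulation, a clip whose segmented shape matches the template but whose motion differs (or vice versa) receives only a partial score, which is the intuition behind using the two cues together.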

BibTeX Reference
@conference{Ke-2007-9752,
title = {Spatio-temporal Shape and Flow Correlation for Action Recognition},
author = {Yan Ke and Rahul Sukthankar and Martial Hebert},
booktitle = {Visual Surveillance Workshop},
keyword = {event, activity, action, recognition, video, space-time, shape, flow},
sponsor = {NSF},
grantID = {IIS-0534962},
school = {Robotics Institute, Carnegie Mellon University},
month = {June},
year = {2007},
address = {Pittsburgh, PA},
}