A view-invariant internal representation of human activities and their context is essential for reasoning about both normative and anomalous human behavior in the proper situational context. Just as changing the viewing angle of a person performing an activity does not change our interpretation of that activity, a truly autonomous surveillance system must maintain an interpretation of the scene that is invariant to the viewing angle, making it possible to understand, forecast, or simulate possible outcomes of anomalous behaviors in real scenarios. In a joint effort with multiple groups at CMU, we are developing a portfolio of methods and tools for human activity analysis that uses a rich view-invariant internal world representation to detect simple and complex human activities in video surveillance scenarios. My group focuses on efficient activity classification and localization for our autonomous surveillance system.
A View-Invariant Internal World Representation for Predictive Cognitive Human Activity Understanding
Associated Lab: DeLight