
A View-Invariant Internal World Representation for Predictive Cognitive Human Activity Understanding

Associated Lab: DeLight
Last Project Publication Year: 2019

A view-invariant internal representation of human activities and their context is essential for reasoning about both normative and anomalous human behavior in its proper situational context. Just as changing the viewing angle of a person performing an activity does not change our interpretation of that activity, a truly autonomous surveillance system must maintain an interpretation of the scene that is invariant to the viewing angle, making it possible to understand, forecast, or simulate possible outcomes of anomalous behaviors in real scenarios. In a joint effort with multiple groups at CMU, we are developing a portfolio of methods and tools for human activity analysis that uses a rich, view-invariant internal world representation to detect simple and complex human activities in a video surveillance scenario. My group focuses on efficient activity classification and localization for our autonomous surveillance system.
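As a toy illustration of the view-invariance idea (not the project's actual representation), the sketch below describes a 3D pose sequence by its pairwise joint distances, a descriptor that does not change when the camera viewpoint rotates; the pose data here is synthetic and the descriptor choice is only an assumption for demonstration.

```python
# Illustrative sketch only: pairwise joint distances as a simple
# view-invariant descriptor of a 3D pose sequence.
import numpy as np

def rotation_about_y(theta):
    """3x3 rotation about the vertical (y) axis, i.e. a change in camera yaw."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def view_invariant_descriptor(pose_seq):
    """pose_seq: (T, J, 3) array of 3D joint positions over T frames.
    Returns per-frame pairwise joint distances, which are unchanged by
    rigid camera rotations."""
    diffs = pose_seq[:, :, None, :] - pose_seq[:, None, :, :]  # (T, J, J, 3)
    return np.linalg.norm(diffs, axis=-1)                      # (T, J, J)

# Synthetic "activity": a smooth random 3D pose sequence (50 frames, 17 joints).
rng = np.random.default_rng(0)
pose = np.cumsum(rng.normal(scale=0.01, size=(50, 17, 3)), axis=0) \
       + rng.normal(size=(1, 17, 3))

# The same activity observed from a camera rotated by 60 degrees.
rotated = pose @ rotation_about_y(np.deg2rad(60)).T

d1 = view_invariant_descriptor(pose)
d2 = view_invariant_descriptor(rotated)
# Near machine precision: the descriptor is identical across the two views.
print("max descriptor difference across views:", np.abs(d1 - d2).max())
```

A real system would of course learn such a representation from video rather than rely on hand-crafted distances, but the same test, comparing descriptors of one activity rendered from different viewpoints, is a natural sanity check for view invariance.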

Publications
Improving Object Detection with Inverted Attention
Zeyi Huang, Wei Ke and Dong Huang

Conference Paper, IEEE Winter Conference on Applications of Computer Vision, September 2019