
Recognizing Visual Signatures of Spontaneous Head Gestures

Mohit Sharma, Dragan Ahmetovic, Laszlo A. Jeni, and Kris M. Kitani
Conference Paper, Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '18), pp. 400-408, March 2018

Abstract

Head movements are an integral part of human nonverbal communication. As such, the ability to detect various types of head gestures from video is important for robotic systems that need to interact with people, and for assistive technologies that may need to detect conversational gestures to aid communication. To this end, we propose a novel Multi-Scale Deep Convolution-LSTM architecture capable of recognizing short- and long-term motion patterns found in head gestures, from video data of natural and unconstrained conversations. In particular, our model uses Convolutional Neural Networks (CNNs) to learn meaningful representations from short time windows over head motion data. To capture longer-term dependencies, we use Recurrent Neural Networks (RNNs) that extract temporal patterns across the outputs of the CNNs. We compare against classical approaches using discriminative and generative graphical models and show that our model significantly outperforms the baselines.
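
To make the architecture described above concrete, the following is a minimal PyTorch sketch of the Convolution-LSTM idea: 1-D convolutions over short windows of per-frame head-motion features, followed by an LSTM that models longer-term temporal dependencies across the convolutional outputs. The layer sizes, the choice of kernel widths, the 3-D head-pose input, and the number of gesture classes are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class ConvLSTMGestureNet(nn.Module):
    def __init__(self, in_features=3, conv_channels=32,
                 kernel_sizes=(3, 5, 7), lstm_hidden=64, num_classes=5):
        super().__init__()
        # One 1-D convolution per temporal scale: each kernel width captures
        # short-term motion patterns at a different resolution (multi-scale).
        self.convs = nn.ModuleList([
            nn.Conv1d(in_features, conv_channels, k, padding=k // 2)
            for k in kernel_sizes
        ])
        # LSTM over the concatenated multi-scale features captures
        # longer-term dependencies across the sequence.
        self.lstm = nn.LSTM(conv_channels * len(kernel_sizes), lstm_hidden,
                            batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, num_classes)

    def forward(self, x):
        # x: (batch, time, features), e.g. per-frame head pitch/yaw/roll.
        x = x.transpose(1, 2)                      # -> (batch, features, time)
        feats = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        feats = feats.transpose(1, 2)              # -> (batch, time, channels)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])         # gesture logits per clip

# Example: a batch of 8 clips, 90 frames each, 3 head-pose angles per frame.
logits = ConvLSTMGestureNet()(torch.randn(8, 90, 3))
print(logits.shape)  # torch.Size([8, 5])

In this sketch the final LSTM state feeds a single per-clip classifier; a per-frame labeling variant would instead apply the classifier to every time step.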

BibTeX

@conference{Sharma-2018-103570,
author = {Mohit Sharma and Dragan Ahmetovic and Laszlo A. Jeni and Kris M. Kitani},
title = {Recognizing Visual Signatures of Spontaneous Head Gestures},
booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '18)},
year = {2018},
month = {March},
pages = {400--408},
}