To React or not to React: End-to-End Visual Pose Forecasting for Personalized Avatar during Dyadic Conversations

Chaitanya Ahuja, Shugao Ma, Louis-Philippe Morency, and Yaser Sheikh
Conference Paper, Proceedings of the 21st ACM International Conference on Multimodal Interaction (ICMI '19), pp. 74-84, October 2019

Abstract

Non-verbal behaviours such as gestures, facial expressions, body posture, and para-linguistic cues have been shown to complement or clarify verbal messages. Hence, to improve telepresence in the form of an avatar, it is important to model these behaviours, especially in dyadic interactions. Creating such personalized avatars not only requires modeling the intrapersonal dynamics between an avatar's speech and its body pose, but also the interpersonal dynamics with the interlocutor present in the conversation. In this paper, we introduce a neural architecture named Dyadic Residual-Attention Model (DRAM), which integrates intrapersonal (monadic) and interpersonal (dyadic) dynamics using selective attention to generate sequences of body pose conditioned on the audio and body pose of the interlocutor and the audio of the human operating the avatar. We evaluate our proposed model on dyadic conversational data consisting of pose and audio of both participants, confirming the importance of adaptive attention between monadic and dyadic dynamics when predicting avatar pose. We also conduct a user study to analyze judgments of human observers. Our results confirm that the generated body pose is more natural and models intrapersonal and interpersonal dynamics better than non-adaptive monadic/dyadic models.
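To make the core idea concrete, below is a minimal PyTorch sketch of the adaptive monadic/dyadic fusion described in the abstract: a monadic stream encodes the avatar operator's audio, a dyadic stream encodes the interlocutor's audio and pose, and a learned attention gate blends the two per time step before pose decoding. All layer sizes, module names, and the specific gating formulation here are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of selective attention between monadic and dyadic
# streams for pose forecasting. Dimensions and architecture choices are
# assumptions for illustration only.
import torch
import torch.nn as nn

class DyadicResidualAttentionSketch(nn.Module):
    """Blend a monadic stream (avatar operator's audio) with a dyadic
    stream (interlocutor's audio and pose) via a learned attention gate,
    then decode the fused features into a body-pose sequence."""

    def __init__(self, audio_dim=128, pose_dim=42, hidden_dim=256):
        super().__init__()
        # Monadic encoder: the avatar operator's own speech features.
        self.monadic_rnn = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        # Dyadic encoder: interlocutor speech and pose, concatenated.
        self.dyadic_rnn = nn.GRU(audio_dim + pose_dim, hidden_dim,
                                 batch_first=True)
        # Scalar gate deciding, per time step, how much to "react"
        # to the interlocutor versus follow one's own speech.
        self.gate = nn.Sequential(nn.Linear(2 * hidden_dim, 1), nn.Sigmoid())
        # Decoder mapping the fused representation to joint positions.
        self.decoder = nn.Linear(hidden_dim, pose_dim)

    def forward(self, avatar_audio, partner_audio, partner_pose):
        m, _ = self.monadic_rnn(avatar_audio)                    # (B, T, H)
        d, _ = self.dyadic_rnn(
            torch.cat([partner_audio, partner_pose], dim=-1))    # (B, T, H)
        a = self.gate(torch.cat([m, d], dim=-1))                 # (B, T, 1)
        fused = a * m + (1.0 - a) * d                            # adaptive blend
        return self.decoder(fused)                               # (B, T, pose_dim)

# Usage with random tensors standing in for real audio/pose features.
model = DyadicResidualAttentionSketch()
B, T = 2, 100
pose = model(torch.randn(B, T, 128),   # avatar operator audio
             torch.randn(B, T, 128),   # interlocutor audio
             torch.randn(B, T, 42))    # interlocutor pose
print(pose.shape)  # torch.Size([2, 100, 42])
```

The sigmoid gate is one simple way to realize "to react or not to react": when the gate saturates toward 1 the prediction is driven by the avatar's own speech (monadic), and toward 0 it is driven by the interlocutor (dyadic).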

BibTeX

@conference{Ahuja-2019-122158,
author = {Chaitanya Ahuja and Shugao Ma and Louis-Philippe Morency and Yaser Sheikh},
title = {To React or not to React: End-to-End Visual Pose Forecasting for Personalized Avatar during Dyadic Conversations},
booktitle = {Proceedings of the 21st ACM International Conference on Multimodal Interaction (ICMI '19)},
year = {2019},
month = {October},
pages = {74--84},
}