Real-time expression cloning using active appearance models

B. Theobald, Iain Matthews, Jeffrey Cohn, and S. Boker
Conference Paper, Proceedings of the 9th International Conference on Multimodal Interfaces (ICMI '07), pp. 134-139, November 2007

Abstract

Active Appearance Models (AAMs) are generative parametric models commonly used to track, recognise and synthesise faces in images and video sequences. In this paper we describe a method for transferring dynamic facial gestures between subjects in real time. The main advantages of our approach are that: (1) the mapping is computed automatically and does not require high-level semantic information describing facial expressions or visual speech gestures; (2) the mapping is simple and intuitive, allowing expressions to be transferred and rendered in real time; (3) the mapped expression can be constrained to have the appearance of the target producing the expression, rather than the source expression being imposed onto the target face; and (4) near-videorealistic talking faces for new subjects can be created without the cost of recording and processing a complete training corpus for each. Our system enables face-to-face interaction with an avatar driven by an AAM of an actual person in real time, and we show examples of arbitrary expressive speech frames cloned across different subjects.
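
The "simple and intuitive" mapping the abstract refers to operates on AAM parameter vectors rather than on pixels. One plausible form of such a transfer, sketched below, expresses each tracked source frame as a deviation from the source subject's neutral (mean) parameters and re-applies that deviation about the target subject's mean. The function name, variable names, gain term, and the exact formula here are illustrative assumptions, not the authors' published mapping.

import numpy as np

def clone_expression(p_src, src_neutral, tgt_neutral, gain=1.0):
    # Hypothetical parameter-space transfer: treat the source frame's
    # AAM parameters as an offset from the source subject's neutral
    # configuration, then apply that offset about the target's neutral.
    # The optional gain rescales expression strength (an assumption).
    offset = np.asarray(p_src) - np.asarray(src_neutral)
    return np.asarray(tgt_neutral) + gain * offset

# Toy usage with made-up 4-dimensional parameter vectors.
src_neutral = np.array([0.0, 0.1, -0.2, 0.05])
tgt_neutral = np.array([0.3, -0.1, 0.0, 0.2])
p_src = np.array([0.5, 0.4, -0.6, 0.1])  # tracked expressive frame

p_tgt = clone_expression(p_src, src_neutral, tgt_neutral)
print(p_tgt)  # parameters to render through the target subject's AAM

Because a transfer of this kind is a single vector operation per frame, it is cheap enough to run alongside real-time AAM tracking, which is consistent with the paper's claim of real-time transfer and rendering.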

BibTeX

@conference{Theobald-2007-17049,
  author = {B. Theobald and Iain Matthews and Jeffrey Cohn and S. Boker},
  title = {Real-time expression cloning using active appearance models},
  booktitle = {Proceedings of the 9th International Conference on Multimodal Interfaces (ICMI '07)},
  year = {2007},
  month = {November},
  pages = {134--139},
}