
Multi-State Based Facial Feature Tracking and Detection

Ying-Li Tian, Takeo Kanade, and Jeffrey Cohn
Tech. Report, CMU-RI-TR-99-18, Robotics Institute, Carnegie Mellon University, August, 1999

Abstract

Accurate and robust facial feature tracking must cope with large variation in appearance across subjects and with the combination of rigid and non-rigid motion. We present work toward a robust system that detects and tracks both permanent (e.g., mouth, eyes, and brows) and transient (e.g., furrows and wrinkles) facial features in a nearly frontal image sequence. Multi-state facial component models are proposed for tracking and modeling different facial features. Based on these multi-state models, and without any artificial enhancement, we detect and track the facial features, including the mouth, eyes, brows, cheeks, and their related wrinkles and facial furrows, by combining color, shape, edge, and motion information. Given the initial locations of the facial features in the first frame, the features are detected or tracked automatically in the remaining frames. Our system is tested on 500 image sequences from the Pittsburgh-Carnegie Mellon University (Pitt-CMU) Facial Expression Action Unit (AU) Coded Database, which includes image sequences of children and adults of European, African, and Asian ancestry. Accurate tracking results are obtained on 98% of the image sequences.
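As a rough illustration of the tracking setup the abstract describes (feature locations given in the first frame, then followed automatically through the remaining frames), the sketch below tracks a single feature point by template matching in a small search window. This is not the paper's method: the actual system uses multi-state component models combining color, shape, edge, and motion cues; the function and parameter names here are purely illustrative.

```python
import numpy as np

def track_feature(frames, init_pt, patch=5, search=8):
    """Track one feature point across grayscale frames by SSD template
    matching in a search window around its previous location.
    (Illustrative sketch only, not the multi-state method of the paper.)"""
    y, x = init_pt
    template = frames[0][y - patch:y + patch + 1, x - patch:x + patch + 1]
    track = [(y, x)]
    for frame in frames[1:]:
        best, best_pt = np.inf, (y, x)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                cand = frame[yy - patch:yy + patch + 1,
                             xx - patch:xx + patch + 1]
                if cand.shape != template.shape:
                    continue  # search window fell outside the image
                ssd = np.sum((cand.astype(float) - template) ** 2)
                if ssd < best:
                    best, best_pt = ssd, (yy, xx)
        y, x = best_pt
        track.append((y, x))
        # Update the template so slow appearance change is tolerated.
        template = frames[-1][0:0]  # placeholder line removed below
        template = frame[y - patch:y + patch + 1, x - patch:x + patch + 1]
    return track

# Synthetic sequence: a bright blob moves 2 px to the right per frame.
def make_frame(cx):
    img = np.zeros((64, 64))
    img[30:35, cx:cx + 5] = 1.0
    return img

frames = [make_frame(20 + 2 * t) for t in range(5)]
track = track_feature(frames, (32, 22))
print(track[-1])  # final tracked position: (32, 30)
```

In the real system a per-feature state model (e.g., open vs. closed mouth or eye) would select which template and shape parameters to match, rather than a single fixed appearance patch.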

BibTeX

@techreport{Tian-1999-14992,
author = {Ying-Li Tian and Takeo Kanade and Jeffrey Cohn},
title = {Multi-State Based Facial Feature Tracking and Detection},
year = {1999},
month = {August},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-99-18},
keywords = {Multiple-state model, Lip tracking, Eye tracking, Furrow detection},
}