
Active Appearance Models with Occlusion

Ralph Gross, Iain Matthews and Simon Baker
Journal Article, Image and Vision Computing, Vol. 24, No. 6, pp. 593-604, January, 2006


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Active Appearance Models (AAMs) are generative parametric models that have been successfully used in the past to track faces in video. A variety of video applications are possible, including dynamic head pose and gaze estimation for real-time user interfaces, lip-reading, and expression recognition. To construct an AAM, a number of training images of faces with a mesh of canonical feature points (usually hand-marked) are needed. All feature points have to be visible in all training images. However, in many scenarios parts of the face may be occluded. Perhaps the most common cause of occlusion is 3D pose variation, which can cause self-occlusion of the face. Furthermore, tracking using standard AAM fitting algorithms often fails in the presence of even small occlusions. In this paper we propose algorithms to construct AAMs from occluded training images and to track faces efficiently in videos containing occlusion. We evaluate our algorithms both quantitatively and qualitatively and show successful real-time face tracking on a number of image sequences containing varying degrees and types of occlusions.
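As background for readers unfamiliar with AAMs, the sketch below illustrates the linear shape component of such a model: the training meshes are reduced by PCA to a mean shape plus a small basis of shape modes, and any parameter vector generates a valid shape instance (the appearance component is built analogously on shape-normalized pixel intensities). This is a minimal illustration on synthetic data, not the paper's occlusion-handling algorithm; all names and the toy dimensions are assumptions.

```python
import numpy as np

# Hypothetical toy data: 10 training "faces", each annotated with a mesh of
# 5 (x, y) feature points, flattened into 10-dimensional shape vectors.
rng = np.random.default_rng(0)
n_train, n_points = 10, 5
shapes = rng.normal(size=(n_train, 2 * n_points))

# Linear shape model: mean shape plus principal modes of variation.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD of the centered training matrix.
_, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
k = 3                  # keep the top-k shape modes
modes = vt[:k]         # (k, 2 * n_points) orthonormal basis of shape variation

def synthesize_shape(p):
    """Generate a shape instance from k parameters p.

    The model is generative: any parameter vector p yields a mesh, which is
    why a fitted AAM can be used to track and re-synthesize faces.
    """
    return mean_shape + p @ modes

# A zero parameter vector reproduces the mean shape exactly.
print(np.allclose(synthesize_shape(np.zeros(k)), mean_shape))  # True
```

The paper's contribution starts where this sketch stops: when feature points are occluded in the training images, the rows of the data matrix have missing entries, so the plain SVD above cannot be applied directly.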

@article{Gross2006,
  author  = {Ralph Gross and Iain Matthews and Simon Baker},
  title   = {Active Appearance Models with Occlusion},
  journal = {Image and Vision Computing},
  year    = {2006},
  month   = {January},
  volume  = {24},
  number  = {6},
  pages   = {593-604},
}