Facial Asymmetry Quantification for Expression Invariant Human Identification

Yanxi Liu, Karen Schmidt, Jeffrey Cohn, and Sinjini Mitra
Journal Article, Computer Vision and Image Understanding, Vol. 91, No. 2, pp. 138-159, July 2003

Abstract

We investigate facial asymmetry as a biometric under expression variation. For the first time, we have defined two types of quantified facial asymmetry measures that are easily computable from facial images and videos. Our findings show that the asymmetry measures of automatically selected facial regions capture individual differences that are relatively stable under facial expression variations. More importantly, a synergy is achieved by combining facial asymmetry information with conventional EigenFace and FisherFace methods. We have assessed the generality of these findings across two publicly available face databases: using a random subset of 110 subjects from the FERET database, a 38% classification error reduction rate is obtained; error reduction rates of 45% to 100% are achieved on 55 subjects from the Cohn-Kanade AU-coded facial expression database. These results suggest that facial asymmetry may provide discriminative information complementary to that used by existing automatic human identification methods.
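
The abstract does not reproduce the two asymmetry measures themselves. As a rough illustration only, the Python sketch below shows one way reflection-based asymmetry features of this general kind could be computed: a per-pixel density difference between a face and its mirror image, and an edge-orientation similarity between the two. The function names (d_face, s_face) and exact formulas are hypothetical, and the sketch assumes a grayscale face image already registered so that its anatomical midline coincides with the central pixel column; it is a minimal sketch under those assumptions, not the authors' exact definitions.

import numpy as np

def d_face(image):
    """Density difference: signed per-pixel asymmetry obtained by
    subtracting the horizontally mirrored face from the original.
    Assumes the midline is the central image column (hypothetical setup)."""
    img = image.astype(float)
    mirrored = img[:, ::-1]  # reflect about the vertical midline
    return img - mirrored

def s_face(image, eps=1e-8):
    """Edge orientation similarity: cosine of the angle between the
    intensity-gradient directions of the face and its mirror image."""
    img = image.astype(float)
    gy, gx = np.gradient(img)            # gradients of the original
    my, mx = np.gradient(img[:, ::-1])   # gradients of the mirrored face
    dot = gx * mx + gy * my
    norms = np.hypot(gx, gy) * np.hypot(mx, my)
    return dot / (norms + eps)           # in [-1, 1] per pixel

# Example: concatenated asymmetry features for a 64x64 normalized face
face = np.random.rand(64, 64)  # stand-in for a registered face image
features = np.concatenate([d_face(face).ravel(), s_face(face).ravel()])

Feature vectors of this form could then be combined with the appearance features produced by EigenFace or FisherFace projections, which is the kind of synergy the abstract reports.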

BibTeX

@article{Liu-2003-8692,
author = {Yanxi Liu and Karen Schmidt and Jeffrey Cohn and Sinjini Mitra},
title = {Facial Asymmetry Quantification for Expression Invariant Human Identification},
journal = {Computer Vision and Image Understanding},
year = {2003},
month = {July},
volume = {91},
number = {2},
pages = {138--159},
}