
Confidence Preserving Machine for Facial Action Unit Detection

Jiabei Zeng, Wen-Sheng Chu, Fernando De la Torre Frade, Jeffrey Cohn, and Zhang Xiong
Conference Paper, Proceedings of (ICCV) International Conference on Computer Vision, pp. 3622–3630, December 2015

Abstract

Varied sources of error contribute to the challenge of facial action unit detection. Previous approaches address specific and known sources; however, many sources are unknown. To address the ubiquity of error, we propose a Confidence Preserving Machine (CPM) that follows an easy-to-hard classification strategy. During training, CPM learns two confident classifiers: a confident positive classifier separates easily identified positive samples from all else, and a confident negative classifier does the same for negative samples. During testing, CPM learns a person-specific classifier using “virtual labels” provided by the confident classifiers. This step is achieved using a quasi-semi-supervised (QSS) approach. Hard samples are typically close to the decision boundary, and the QSS approach disambiguates them using spatio-temporal constraints. To evaluate CPM, we compared it with a baseline single-margin classifier and with state-of-the-art semi-supervised learning, transfer learning, and boosting methods on three datasets of spontaneous facial behavior. With few exceptions, CPM outperformed the baseline and state-of-the-art methods.
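
The easy-to-hard strategy described above can be illustrated with a short sketch. The code below is a hypothetical simplification, not the authors' implementation: it assumes binary 0/1 labels, uses scikit-learn linear SVMs as the confident classifiers, marks "confident" predictions by thresholding decision values with an assumed parameter tau, and omits the spatio-temporal constraints that the QSS step uses to disambiguate hard samples.

# Minimal sketch of an easy-to-hard pipeline in the spirit of CPM.
# Assumptions (for illustration only): linear SVMs, binary 0/1 labels,
# and a decision-value threshold tau standing in for "confidence".
import numpy as np
from sklearn.svm import LinearSVC

def train_confident_classifiers(X_train, y_train):
    # Confident positive classifier: positives vs. everything else.
    pos_clf = LinearSVC(C=1.0).fit(X_train, y_train)
    # Confident negative classifier: negatives vs. everything else (flipped labels).
    neg_clf = LinearSVC(C=1.0).fit(X_train, 1 - y_train)
    return pos_clf, neg_clf

def virtual_labels(pos_clf, neg_clf, X_test, tau=1.0):
    # Label only the "easy" test samples; leave ambiguous ones marked -1.
    s_pos = pos_clf.decision_function(X_test)
    s_neg = neg_clf.decision_function(X_test)
    labels = np.full(len(X_test), -1)
    labels[s_pos > tau] = 1   # confidently positive
    labels[s_neg > tau] = 0   # confidently negative
    return labels

def person_specific_classifier(X_test, labels):
    # Retrain on the test subject's easy samples (virtual labels),
    # then classify the remaining hard samples with the adapted model.
    easy = labels != -1
    clf = LinearSVC(C=1.0).fit(X_test[easy], labels[easy])
    final = labels.copy()
    final[~easy] = clf.predict(X_test[~easy])
    return final

In this sketch the person-specific step simply retrains on the confidently labeled test samples; the paper's QSS formulation additionally exploits spatio-temporal structure among the hard samples, which is not shown here.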

BibTeX

@conference{Zeng-2015-6046,
author = {Jiabei Zeng and Wen-Sheng Chu and Fernando De la Torre Frade and Jeffrey Cohn and Zhang Xiong},
title = {Confidence Preserving Machine for Facial Action Unit Detection},
booktitle = {Proceedings of (ICCV) International Conference on Computer Vision},
year = {2015},
month = {December},
pages = {3622--3630},
}