The Cohn-Kanade AU-Coded Facial Expression Database provides a test bed for research in automatic facial image analysis and is available for use by the research community. The image data consist of approximately 500 image sequences from 100 subjects. Accompanying metadata include annotations of FACS action units and emotion-specified expressions. Algorithm performance on the Cohn-Kanade database was first reported by Cohn, Zlochower, Lien, and Kanade (1999) and Tian, Kanade, and Cohn (2001). For more information on the Cohn-Kanade database and the expanded database (CK+), and to download the agreement form, please visit the database website.
Subjects ranged in age from 18 to 30 years. Sixty-five percent were female, 15 percent African-American, and 3 percent Asian or Latino. The observation room was equipped with a chair for the subject and two Panasonic WV3230 cameras, each connected to a Panasonic S-VHS AG-7500 video recorder with a Horita synchronized time-code generator. One camera was located directly in front of the subject; the other was positioned 30 degrees to the subject's right. Only image data from the frontal camera are available at this time.
Subjects were instructed by an experimenter to perform a series of 23 facial displays that included single action units (e.g., AU 12, lip corners pulled obliquely) and action unit combinations (e.g., AU 1+2, inner and outer brows raised). Each display begins from a neutral or nearly neutral face. For each, an experimenter described and modeled the target display. Six displays were based on descriptions of prototypic emotions (i.e., joy, surprise, anger, fear, disgust, and sadness). These six tasks, along with mouth opening in the absence of other action units, were annotated by certified FACS coders. Seventeen percent of the data were independently annotated by a second coder to assess reliability. Inter-observer agreement was quantified with the kappa coefficient, which is the proportion of agreement above what would be expected to occur by chance (Cohen, 1960; Fleiss, 1981). The mean kappa for inter-observer agreement was 0.86. Image sequences from neutral to target display were digitized into 640 × 480 or 640 × 490 pixel arrays with 8-bit grayscale precision.
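To illustrate how the reported inter-observer agreement is computed, the following is a minimal sketch of Cohen's kappa for two coders' labels. The label values and sequences shown are hypothetical examples, not actual Cohn-Kanade annotations.

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance from each
    coder's marginal label frequencies."""
    assert len(coder_a) == len(coder_b) and len(coder_a) > 0
    n = len(coder_a)
    labels = set(coder_a) | set(coder_b)
    # Observed proportion of items on which the two coders agree
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of marginal frequencies, summed over labels
    p_e = sum((coder_a.count(l) / n) * (coder_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-sequence labels from two FACS coders
coder_1 = ["AU12", "AU12", "AU1+2", "AU12", "AU1+2"]
coder_2 = ["AU12", "AU1+2", "AU1+2", "AU12", "AU1+2"]
print(round(cohens_kappa(coder_1, coder_2), 3))
```

A kappa of 1.0 indicates perfect agreement; 0 indicates agreement no better than chance, so the database's mean of 0.86 reflects high coder reliability.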
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.