Passive Driver Gaze Tracking with Active Appearance Models

Takahiro Ishikawa, Simon Baker, Iain Matthews and Takeo Kanade
Tech. Report, CMU-RI-TR-04-08, Robotics Institute, Carnegie Mellon University, February, 2004

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Passive gaze estimation is usually performed by locating the pupils and the inner and outer eye corners in an image of the driver's head. Of these feature points, the eye corners are just as important as, and perhaps harder to detect than, the pupils. The eye corners are usually found using local feature detectors and trackers. In this paper, we describe a passive driver gaze tracking system that uses a global head model, specifically an Active Appearance Model (AAM), to track the whole head. From the AAM, the eye corners, eye region, and head pose are robustly extracted and then used to estimate the gaze.
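The geometric step described above (gaze from eye corners and pupil location) can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simple spherical-eyeball model in which the eyeball center and radius are approximated from the two tracked eye corners, and the horizontal gaze angle is recovered from the pupil's offset. The function name, the scale parameter, and the 2D simplification are all illustrative assumptions.

```python
import numpy as np

def estimate_gaze_angle(inner_corner, outer_corner, pupil, radius_scale=1.0):
    """Estimate a horizontal gaze angle (radians) from 2D eye landmarks.

    Illustrative sketch only: assumes a spherical eyeball whose center
    lies at the midpoint of the eye corners and whose radius is half the
    corner-to-corner distance (scaled by `radius_scale`).
    """
    inner = np.asarray(inner_corner, dtype=float)
    outer = np.asarray(outer_corner, dtype=float)
    pupil = np.asarray(pupil, dtype=float)

    # Approximate the eyeball center as the midpoint of the two corners.
    center = (inner + outer) / 2.0
    # Approximate the eyeball radius from the corner-to-corner distance.
    radius = radius_scale * np.linalg.norm(outer - inner) / 2.0
    # Horizontal pupil offset from the center, clipped to the radius.
    dx = np.clip(pupil[0] - center[0], -radius, radius)
    # Under the spherical-eyeball assumption, offset = radius * sin(angle).
    return float(np.arcsin(dx / radius))
```

A pupil centered between the corners yields a gaze angle of zero; as the pupil moves toward a corner the angle grows toward ±90°. The actual system additionally uses the AAM's 3D head pose to correct for head rotation, which this 2D sketch omits.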

@techreport{ishikawa2004passive,
  author = {Takahiro Ishikawa and Simon Baker and Iain Matthews and Takeo Kanade},
  title = {Passive Driver Gaze Tracking with Active Appearance Models},
  year = {2004},
  month = {February},
  institution = {Carnegie Mellon University},
  address = {Pittsburgh, PA},
  number = {CMU-RI-TR-04-08},
  keywords = {Gaze estimation, driver gaze tracking, Active Appearance Models},
}