Toward Autonomous Driving: The CMU Navlab. Part I: Perception

Chuck Thorpe, Martial Hebert, Takeo Kanade and Steven Shafer
Journal Article, IEEE Expert, Vol. 6, No. 4, pp. 31 - 42, August, 1991

The Navlab project seeks to build an autonomous robot that can operate in a realistic environment with bad weather, bad lighting, and bad or changing roads. The perception techniques developed for the Navlab include road following using color classification and neural networks, discussed with reference to three road-following systems: SCARF, YARF, and ALVINN. Three-dimensional perception is examined using three types of terrain representation: obstacle maps, terrain feature maps, and high-resolution maps. Perception continues to be a major obstacle in developing autonomous vehicles. This work is part of the Defense Advanced Research Projects Agency's Strategic Computing Initiative.
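The color-classification approach used by systems such as SCARF labels each pixel as road or off-road according to which color model it fits best. A minimal illustrative sketch of that idea, using a single Gaussian color model per class and a Mahalanobis-distance decision rule, is shown below; the class means and covariances here are hypothetical placeholders, and the actual system maintained multiple, adaptively updated color models per class.

```python
import numpy as np

def classify_road_pixels(pixels, road_mean, road_cov, offroad_mean, offroad_cov):
    """Label each RGB pixel True (road) or False (off-road) by choosing the
    Gaussian color class with the smaller Mahalanobis distance.

    pixels: (N, 3) array of RGB values.
    """
    def sq_mahalanobis(x, mean, cov):
        diff = x - mean
        inv_cov = np.linalg.inv(cov)
        # Per-row quadratic form: diff[i] @ inv_cov @ diff[i]
        return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

    return (sq_mahalanobis(pixels, road_mean, road_cov)
            < sq_mahalanobis(pixels, offroad_mean, offroad_cov))

# Hypothetical color models: grayish road vs. greenish vegetation.
road_mean = np.array([100.0, 100.0, 100.0])
offroad_mean = np.array([40.0, 120.0, 40.0])
labels = classify_road_pixels(
    np.array([[98.0, 102.0, 99.0],    # near the road color model
              [42.0, 118.0, 41.0]]),  # near the off-road color model
    road_mean, np.eye(3), offroad_mean, np.eye(3))
```

In the full system, the pixel labels are grouped into road regions and the result is fed to the road-geometry estimator; the per-class color models are re-estimated from the classified image so the classifier tracks changing road appearance.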

BibTeX Reference
@article{thorpe1991navlab,
  author  = {Chuck Thorpe and Martial Hebert and Takeo Kanade and Steven Shafer},
  title   = {Toward Autonomous Driving: The CMU Navlab. Part I: Perception},
  journal = {IEEE Expert},
  year    = {1991},
  month   = {August},
  volume  = {6},
  number  = {4},
  pages   = {31--42},
}