Much progress has been made toward solving the autonomous lane keeping problem using vision-based methods. Systems have been demonstrated that can drive robot vehicles at high speed over long distances. The current challenge for vision-based on-road navigation researchers is to create systems that maintain the performance of existing lane keeping systems while adding the ability to execute tactical level driving tasks like lane transition and intersection detection and navigation.
There are many ways to add tactical functionality to a driving system. Solutions range from developing task-specific software modules to grafting additional functionality onto a basic lane keeping system. Such solutions are problematic because they either make reuse of acquired knowledge difficult or impossible, or preclude the use of alternative lane keeping systems.
A more desirable solution is to develop a robust, lane-keeper-independent control scheme that provides the functionality to execute tactical actions. Based on this hypothesis, techniques used to execute tactical level driving tasks should:
Be based on a single framework that is applicable to a variety of tactical level actions,
Be extensible to other vision based lane keeping systems, and
Require little or no modification of the lane keeping system with which they are used.
This thesis examines a framework, called Virtual Active Vision, which provides this functionality through intelligent control of the visual information presented to the lane keeping system. Novel solutions based on this framework for two classes of tactical driving tasks, lane transition and intersection detection and traversal, are presented in detail. Specifically, algorithms which allow the ALVINN lane keeping system to robustly execute lane transition maneuvers like lane changing, entrance and exit ramp detection and traversal, and obstacle avoidance are presented. Additionally, with the aid of active camera control, the ALVINN system enhanced with Virtual Active Vision tools can successfully detect and navigate basic road intersections.
Sponsor: DARPA, TACOM, USDOT
Grant ID: DACA76-89-C-0014, DAAE07-90-C-R059, DTNH22-93-C-07023, DTFH61-94-Z-00001
Number of pages: 145
Todd Jochem, "Vision Based Tactical Driving," doctoral dissertation, tech. report CMU-RI-TR-96-14, Robotics Institute, Carnegie Mellon University, January, 1996
@phdthesis{jochem_1996_vision,
  author = "Todd Jochem",
  title = "Vision Based Tactical Driving",
  school = "Robotics Institute, Carnegie Mellon University",
  month = "January",
  year = "1996",
  number = "CMU-RI-TR-96-14",
  address = "Pittsburgh, PA",
}
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.