This page is provided for historical and archival purposes only. While the seminar dates are correct, we offer no guarantee of informational accuracy or link validity. Contact information for the speakers, hosts and seminar committee are certainly out of date.
The Robotics Institute Carnegie Mellon University
This talk will present results obtained in the area of autonomous navigation using range data. The work is illustrated by demonstrations on the CMU HMMWV using a laser range finder. The problem is to control the trajectory of a vehicle through unknown terrain based on the analysis of range images. The goal is to achieve continuous motion at moderate speeds.
The first approach that I will discuss involves bucketizing the 3-D points calculated from the range image, deciding which of the cells in the resulting grid are navigable, and passing them to a map management and planning module before issuing commands to the vehicle. I will show a video of a 1 km traverse of a cross-country area. I will focus on the perception part of the system; the other two parts, local map management and path generation, use the work of Dirk Langer and Julio Rosenblatt, respectively.
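The bucketizing step can be sketched as follows. This is a minimal illustration, not the actual system: the cell size, the elevation-spread threshold, and the navigability rule are all assumptions made for the example.

```python
import math

# Illustrative parameters (assumed, not from the talk).
CELL_SIZE = 0.5          # meters per grid cell
MAX_ELEV_SPREAD = 0.3    # max elevation range tolerated within a cell

def bucketize(points, cell_size=CELL_SIZE):
    """Group (x, y, z) points by the grid cell containing (x, y)."""
    cells = {}
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        cells.setdefault(key, []).append(z)
    return cells

def navigable_cells(cells, max_spread=MAX_ELEV_SPREAD):
    """Call a cell navigable when its elevation spread is small.

    A real system would also check slope and point density; this
    single test stands in for that analysis.
    """
    return {key for key, zs in cells.items()
            if max(zs) - min(zs) <= max_spread}
```

The navigable cell set would then be handed to the local map management and path generation modules, which fuse it with previous observations and choose steering commands.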
This first approach is simple in principle and reasonably efficient in operation. However, it has a number of drawbacks which limit both its performance and its generality. The main problem is that the algorithm is fundamentally image-based, in that the entire range image is processed as a unit. As a result, the system suffers from significant latency and is not easily generalizable to other types of range sensors, e.g., single-scanline sensors.
To address these limitations, I will describe an alternative approach in which each data point is processed individually and independently of the rest of the image. In this approach, a set of state values is maintained at every point in the environment and is updated at the appropriate point whenever a new measurement is available. State values may include elevations, slopes, and uncertainty measures. I will show how the state can be updated efficiently when a new pixel is added, and how measures of uncertainty and confidence can be maintained. I will also show how the processing of the range data points can be interleaved with the generation of driving commands for greater efficiency. This approach has two advantages. First, it reduces the latency in the system by using the parts of the terrain in which enough measurements have been made as soon as they become available, and by issuing driving commands at regular intervals independently of the scanning rate of the sensor. Second, it is applicable to any range sensor independently of its geometry. I will show how the first system can be modified, including the map management and the arc generation, in order to accommodate the point processing approach. I will show results obtained using the current laser range finder as well as using a single-scanline range finder simulated from the imaging scanner.
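The point-wise update above can be sketched with a small per-cell state record. Here the state is a running elevation mean and variance (Welford's incremental algorithm) plus a sample count used as a confidence measure; the field names and the confidence rule are illustrative assumptions, not the talk's actual formulation.

```python
class CellState:
    """Per-cell terrain state, updated one measurement at a time."""

    def __init__(self):
        self.n = 0          # number of measurements seen in this cell
        self.mean = 0.0     # running mean elevation
        self.m2 = 0.0       # sum of squared deviations (for variance)

    def update(self, z):
        """Fold one elevation measurement into the state (Welford)."""
        self.n += 1
        delta = z - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (z - self.mean)

    @property
    def variance(self):
        """Sample variance; infinite until two measurements exist."""
        return self.m2 / (self.n - 1) if self.n > 1 else float('inf')

    def confident(self, min_points=3):
        """Cells with enough samples can be used immediately,
        without waiting for a full image to be acquired."""
        return self.n >= min_points
```

Because each `update` touches only one cell, this style of processing does not care whether the points come from an imaging scanner or a single-scanline sensor, which is the generality argument made above.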
I will conclude the talk by discussing the fundamental limitations of the systems. I will relate many of these limitations to the complexity analysis of the off-road autonomous driving problem which Alonzo Kelly described at the Seminar a couple of weeks ago. I will also briefly mention some of the current extensions including interfacing with other more global planning modules, and eliminating the need for 3-D reconstruction.
Host: Yangsheng Xu (firstname.lastname@example.org) Appointment: Lalit Katragadda (email@example.com)