Supervised TeleRobotics using Incremental Polyhedral-Earth geometry (STRIPE)
This project is no longer active.
Head: Chuck Thorpe
Mailing address:
Carnegie Mellon University
Robotics Institute
5000 Forbes Avenue
Pittsburgh, PA 15213
Associated center(s) / consortia:
 Vision and Autonomous Systems Center (VASC)
Associated lab(s) / group(s):
 NavLab
Overview
Supervised TeleRobotics using Incremental Polyhedral-Earth geometry (STRIPE) is a system for vehicle teleoperation across low bandwidth links and links with transmission delays.

Driving a vehicle, either directly or remotely, is an inherently visual task. When heavy fog limits visibility, safe drivers reduce their car's speed to a slow crawl, even along very familiar roads. In teleoperation systems, an operator's view is limited to data provided by one or more cameras mounted on the remote vehicle. Traditional vehicle teleoperation systems require real-time transmission of a continuous stream of images from the vehicle to the operator workstation. The operator views the scene on one or more monitors, and controls the vehicle from a car-like console. The bandwidth necessary to transmit the images to the operator workstation is very large, about 5MB of data per second for high resolution monochrome images.
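The quoted figure of about 5MB per second is easy to sanity-check with back-of-the-envelope arithmetic. The exact resolution and frame rate are not stated here, so the numbers below are assumed, illustrative values:

```python
# Rough bandwidth estimate for a continuous monochrome image stream.
# Resolution and frame rate are assumed values, not figures from STRIPE.
width, height = 640, 512      # pixels (assumed high-resolution monochrome)
bytes_per_pixel = 1           # 8-bit grayscale
frames_per_second = 15        # assumed near-real-time video rate

bandwidth = width * height * bytes_per_pixel * frames_per_second
print(f"{bandwidth / 1e6:.1f} MB/s")  # ~4.9 MB/s, on the order of the 5MB/s quoted
```

Even modest changes to resolution or frame rate keep the requirement in the multi-megabyte-per-second range, far beyond what delayed or low-bandwidth links can sustain.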

Image transmission can be delayed for a variety of reasons, such as large distances between the base station and the vehicle (e.g., when the vehicle is on Mars) or low-bandwidth transmission links (e.g., non-line-of-sight radio links). As the delay between images increases, an operator's ability to accurately teleoperate a vehicle in the traditional manner rapidly decreases. If there are several seconds between images, the visual feedback that the operator needs to steer accurately is simply not available.

In STRIPE, the low-level steering details are left to the vehicle. The operator indicates high-level directions (e.g., "go up the road and turn right") by using a mouse to pick a series of points in the image (known as "waypoints"), which define the desired path. The vehicle moves along the designated path while the operator waits for the next image to arrive.

To compute the appropriate steering direction, the STRIPE module on the vehicle must convert the 2D path in the image into a 3D path in the real world. Simple flat-earth techniques, in which all of the world points are constrained to lie on a single plane, are not sufficient to enable the vehicle to steer itself correctly when the path to be traversed is non-planar. In STRIPE, the 2D waypoints are transmitted to the vehicle and initially projected onto the vehicle's current groundplane. The resulting 3D waypoints are used to initiate steering, and the vehicle begins to move. Several times a second, the vehicle re-estimates the location of its current groundplane by measuring vehicle position and orientation. The original image waypoints are then projected onto the new groundplane to produce new 3D waypoints, and the steering direction is adjusted accordingly. This reproject-and-drive procedure is repeated until the last waypoint is reached or new waypoints are received.
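The core geometric step, projecting an image waypoint onto the vehicle's current groundplane, amounts to intersecting the camera ray through the pixel with a plane. The sketch below is a minimal illustration of that idea, not the STRIPE implementation; the camera model, function names, and parameters are all assumptions:

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy):
    # Back-project pixel (u, v) into a unit direction in the camera frame,
    # using an assumed pinhole model with focal lengths (fx, fy) and
    # principal point (cx, cy).
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def project_onto_groundplane(u, v, cam_pos, cam_rot,
                             plane_n, plane_d, fx, fy, cx, cy):
    """Intersect the viewing ray through pixel (u, v) with the plane
    plane_n . x = plane_d, returning the 3D waypoint in world frame.

    cam_pos is the camera center; cam_rot rotates camera-frame
    directions into the world frame. Returns None when the ray is
    (nearly) parallel to the plane or the intersection lies behind
    the camera."""
    ray_world = cam_rot @ pixel_to_ray(u, v, fx, fy, cx, cy)
    denom = plane_n @ ray_world
    if abs(denom) < 1e-9:
        return None          # ray parallel to plane: no usable waypoint
    t = (plane_d - plane_n @ cam_pos) / denom
    if t < 0:
        return None          # intersection behind the camera
    return cam_pos + t * ray_world
```

In the reproject-and-drive loop, the vehicle would call `project_onto_groundplane` on the same stored image waypoints each time it re-estimates `plane_n` and `plane_d` from its current pose, then steer toward the refreshed 3D points.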

STRIPE has no advance knowledge of the 3D locations of the waypoints. However, as the vehicle approaches a particular waypoint, the vehicle's groundplane becomes an increasingly accurate approximation of the plane that the waypoint lies on. By the time the vehicle needs to steer based on that particular waypoint, it has precise knowledge of where that point lies in the 3D world.