Autonomous manipulation in natural environments, in which few constraints exist on the geometry of the objects to be manipulated, is becoming increasingly important. Its potential applications include sample collection for planetary exploration and automated excavation.
The challenge is to handle the many different situations (terrain configuration, object shape, etc.) encountered over the course of a single robot's mission. Furthermore, the robot should be as autonomous as possible to avoid some of the drawbacks of teleoperation. In particular, it should be able to build models of its environment that are relevant to the task without requiring extensive expert knowledge from an operator.
We are studying the problem of perception and manipulation in natural environments in the context of the CMU Ambler, a six-legged machine for planetary exploration. In this case, the task is to collect samples such as small rocks on the surface of the terrain. The task involves extracting the potential samples from visual data, building models of their shapes, and using the models to pick up and store the samples.
We are developing a set of perception modules for this task. All the perception modules currently use range images. The perception modules include: feature detection; range shadow analysis based on sensor geometry; segmentation by deformable contours (or "snakes"); representation by superquadric surfaces; segmentation and representation by deformable surfaces ("3-D snakes"); and matching and merging of data acquired from different viewpoints. Using those modules, we have built a system that manipulates natural objects (rocks) that are partially buried in soft material (sand) using a clam-shell gripper. Using the same approach, we are developing a system that manipulates natural objects of unknown shapes in a cluttered stack of objects. To test the system we use a testbed that includes a range finder, a robot arm, a gripper, and a terrain mockup.
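To make the deformable-contour module concrete, the sketch below shows a minimal greedy "snake" update of the classic kind: each contour point moves to the neighboring pixel that minimizes a sum of continuity, curvature, and image energies. This is an illustrative assumption about how such a module could work, not the system's actual implementation; all names and weights are hypothetical.

```python
# Minimal greedy deformable-contour ("snake") step -- an illustrative
# sketch only, not the paper's code.  alpha, beta, gamma are assumed
# weights for continuity, curvature, and image (edge) energy.
import numpy as np

def greedy_snake_step(points, edge_map, alpha=1.0, beta=1.0, gamma=2.0):
    """One greedy update over a closed contour.

    points:   (n, 2) float array of (row, col) contour points
    edge_map: 2-D array; larger values mark stronger edges
    """
    n = len(points)
    # Mean spacing between consecutive points (closed contour).
    d_mean = np.mean(np.linalg.norm(
        np.diff(points, axis=0, append=points[:1]), axis=1))
    new_pts = points.copy()
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for i in range(n):
        prev_p = new_pts[i - 1]          # already-updated predecessor
        next_p = points[(i + 1) % n]
        best, best_e = points[i], np.inf
        for dy, dx in offsets:
            cand = points[i] + np.array([dy, dx], dtype=float)
            y, x = int(cand[0]), int(cand[1])
            if not (0 <= y < edge_map.shape[0] and 0 <= x < edge_map.shape[1]):
                continue
            e_cont = alpha * (np.linalg.norm(cand - prev_p) - d_mean) ** 2
            e_curv = beta * np.linalg.norm(prev_p - 2 * cand + next_p) ** 2
            e_img = -gamma * edge_map[y, x]   # strong edges attract the point
            e = e_cont + e_curv + e_img
            if e < best_e:
                best_e, best = e, cand
        new_pts[i] = best
    return new_pts
```

Iterating this step shrinks an initial contour placed around an object until it locks onto the object's boundary in the edge map, which is the behavior the segmentation module relies on.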
We are integrating the perception modules into a system in which perception and manipulation strategies are selected from the analysis of a task defined by an operator. The task description includes the type of manipulation operation to be performed, the type of environment, and the region of the world in which the system should operate. Once the selected sequence of perception operations is executed, the object can be manipulated using the representation built by the perception system. The techniques developed on this sampling testbed will be used in other robotic systems that operate in natural environments.
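One way to picture this task-driven selection is as a lookup from the operator's task description to an ordered sequence of perception modules. The sketch below assumes a simple table keyed on operation and environment type; the actual selection logic, field names, and module names are hypothetical.

```python
# Hypothetical sketch of task-driven strategy selection.  The mapping
# from (operation, environment) to a perception pipeline is an assumed
# simplification, not the system's actual mechanism.
from dataclasses import dataclass

@dataclass
class TaskDescription:
    operation: str    # e.g. "pick_up" (assumed label)
    environment: str  # e.g. "partially_buried" or "cluttered_stack"
    region: tuple     # operator-defined world region, (x_min, y_min, x_max, y_max)

# Illustrative table: each situation selects its own sequence of modules.
PIPELINES = {
    ("pick_up", "partially_buried"):
        ["range_shadow_analysis", "snakes", "superquadrics"],
    ("pick_up", "cluttered_stack"):
        ["feature_detection", "3d_snakes", "view_merging"],
}

def select_pipeline(task: TaskDescription):
    """Return the ordered perception modules for this task description."""
    return PIPELINES.get((task.operation, task.environment),
                         ["feature_detection"])
```

Running the selected sequence produces the shape representation that the manipulation stage then uses, which matches the flow described above.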
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.