Visual Sensing for Developing Autonomous Behavior in Snake Robots

H. Ponte, M. Queenan, C. Mertz, M. Travers, F. Enner, M. Hebert, and H. Choset
Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 2779-2784, May 2014

Abstract

Snake robots are uniquely qualified to investigate a large variety of settings, including archaeological sites, natural disaster zones, and nuclear power plants. For these applications, modular snake robots have been tele-operated to perform specific tasks using images returned to the operator from an onboard camera in the robot's head. To give the operator an even richer view of the environment and to enable the robot to perform autonomous tasks, we developed a structured light sensor that can build three-dimensional maps of the environment. This paper presents a sensor uniquely suited to the severe constraints on size, power, and computational footprint imposed by snake robots. Using range data, in the form of 3D point clouds, we show that it is possible to pair high-level planning with mid-level control to accomplish complex tasks without operator intervention.
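
The structured light sensor described in the abstract recovers depth by triangulating a projected laser stripe against the camera that observes it. The Python sketch below illustrates that triangulation principle only: the focal length, baseline, laser tilt, and the triangulate_stripe helper are hypothetical values and names chosen for illustration, not the parameters or implementation of the sensor presented in the paper.

import numpy as np

# Minimal structured-light triangulation sketch (hypothetical parameters, not
# the authors' sensor design): a laser line projector offset from the camera
# by a known baseline casts a stripe onto the scene; the stripe's pixel
# position, together with the camera intrinsics, yields depth by triangulation.

FOCAL_LENGTH_PX = 600.0             # assumed focal length in pixels
BASELINE_M = 0.03                   # assumed camera-to-laser baseline in meters
CX, CY = 320.0, 240.0               # assumed principal point (640x480 image)
LASER_ANGLE_RAD = np.deg2rad(10.0)  # assumed laser tilt about the camera x-axis

def triangulate_stripe(stripe_pixels):
    """Convert detected laser-stripe pixels (u, v) into 3D points (x, y, z)
    in the camera frame, assuming the laser plane is offset by BASELINE_M
    along y and tilted by LASER_ANGLE_RAD."""
    points = []
    for u, v in stripe_pixels:
        # Normalized image-plane ray through pixel (u, v).
        x_n = (u - CX) / FOCAL_LENGTH_PX
        y_n = (v - CY) / FOCAL_LENGTH_PX
        # Intersect the ray (x_n*z, y_n*z, z) with the laser plane
        #   y = BASELINE_M + z * tan(LASER_ANGLE_RAD)
        denom = y_n - np.tan(LASER_ANGLE_RAD)
        if abs(denom) < 1e-6:
            continue  # ray nearly parallel to the laser plane
        z = BASELINE_M / denom
        if z <= 0:
            continue  # intersection behind the camera
        points.append((x_n * z, y_n * z, z))
    return np.array(points)

# Example: stripe pixels detected along one image row; sweeping the stripe
# (or moving the robot head) accumulates rows of points into a 3D point cloud.
stripe = [(u, 400.0) for u in range(200, 440, 20)]
cloud = triangulate_stripe(stripe)
print(cloud.shape)  # (N, 3) points in the camera frame

Accumulating such stripes as the robot's head moves produces the 3D point clouds that the paper's high-level planning and mid-level control layers consume.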

BibTeX

@conference{Ponte-2014-107827,
author = {H. Ponte and M. Queenan and C. Mertz and M. Travers and F. Enner and M. Hebert and H. Choset},
title = {Visual Sensing for Developing Autonomous Behavior in Snake Robots},
booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
year = {2014},
month = {May},
pages = {2779--2784},
}