
A Human-Assisted Approach for a Mobile Robot to Learn 3D Object Models using Active Vision

Matthijs Jan Zwinderman, Paul Rybski, and Gert Kootstra
Conference Paper, Proceedings of the 19th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '10), pp. 397-403, September 2010

Abstract

In this paper we present an algorithm that allows a human to naturally and easily teach a mobile robot how to recognize objects in its environment. The human selects the object by pointing at it with a laser pointer. The robot detects the laser reflections with its cameras and uses this data to generate an initial 2D segmentation of the object. The 3D positions of SURF feature points are extracted from the designated area using stereo vision. As the robot moves around the object, new views of the object are obtained, from which feature points are extracted. These features are filtered using active vision. The complete object representation consists of feature points registered with 3D pose data. We describe the method and show that it works well through experiments on real-world data collected with our robot. We use an extensive dataset of 21 objects differing in size, shape, and texture.
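The sketch below is not the authors' code; it is a minimal illustration of two of the steps the abstract describes, assuming an OpenCV stereo pipeline. The function names, thresholds, and the `roi_mask`/`Q` parameters are hypothetical, and ORB is substituted for SURF only because SURF lives in OpenCV's non-free contrib module; the paper itself uses SURF features.

```python
import numpy as np
import cv2


def detect_laser_spot(image_bgr, min_brightness=240):
    """Return pixel coordinates of a bright, red-dominant laser reflection, or None.

    Rough stand-in for the laser-spot detection step; thresholds are illustrative,
    not the authors' settings.
    """
    b, g, r = [c.astype(np.int32) for c in cv2.split(image_bgr)]
    mask = ((r >= min_brightness) & (r > g + 40) & (r > b + 40)).astype(np.uint8) * 255
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])


def features_with_depth(left_gray, disparity, Q, roi_mask=None):
    """Extract keypoints inside an (optional) segmentation mask and attach 3D positions.

    disparity: float32 disparity map for the left image (e.g., StereoSGBM output / 16).
    Q: 4x4 disparity-to-depth matrix from stereo rectification.
    The paper uses SURF; ORB is used here because it ships with stock OpenCV.
    """
    detector = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = detector.detectAndCompute(left_gray, roi_mask)
    if descriptors is None:
        return []
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    model = []
    for kp, desc in zip(keypoints, descriptors):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        x, y, z = points_3d[v, u]
        if np.isfinite(z) and z > 0:  # keep only points with valid depth
            model.append((np.array([x, y, z], dtype=np.float32), desc))
    return model
```

In the paper's pipeline, the per-view (feature, 3D position) pairs gathered as the robot circles the object would then be filtered by the active-vision step and registered with the robot's pose data to form the object model; that registration is not shown here.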

BibTeX

@conference{Zwinderman-2010-10526,
author = {Matthijs Jan Zwinderman and Paul Rybski and Gert Kootstra},
title = {A Human-Assisted Approach for a Mobile Robot to Learn 3D Object Models using Active Vision},
booktitle = {Proceedings of the 19th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '10)},
year = {2010},
month = {September},
pages = {397--403},
}