Learning Qualitative Spatial Relations for Robotic Navigation

Abdeslam Boularias, Felix Duvallet, Jean Hyaejin Oh, and Anthony Stentz
Conference Paper, Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI '16), pp. 4130–4134, July 2016

Abstract

We consider the problem of robots following natural language commands through previously unknown outdoor environments. A robot receives commands in natural language, such as “Navigate around the building to the far left of the fire hydrant and near the tree.” The robot first needs to classify its surrounding objects into categories, using images obtained from its sensors. The result of this classification is a map of the environment in which each object is given a list of semantic labels, such as “tree” and “car,” with varying degrees of confidence. Then, the robot needs to ground the nouns in the command, i.e., map each noun in the command to a physical object in the environment. The robot also needs to ground a specified navigation mode, such as “navigate quickly” or “navigate covertly,” as a cost map. In this work, we show how to ground nouns and navigation modes by learning from examples provided by humans.
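The noun-grounding step described above can be illustrated with a minimal sketch. This is not the paper's learned model: it simply maps each noun to the object whose semantic label carries the highest classifier confidence, using a toy semantic map whose object names, labels, and confidence values are invented for illustration.

```python
# Hypothetical sketch of noun grounding over a semantic map.
# The map, labels, and confidences below are illustrative only,
# not data or the method from the paper.

def ground_noun(noun, semantic_map):
    """Return the (object id, confidence) pair for the object whose
    classifier gave the given label the highest confidence."""
    best_obj, best_conf = None, 0.0
    for obj_id, label_confs in semantic_map.items():
        conf = label_confs.get(noun, 0.0)
        if conf > best_conf:
            best_obj, best_conf = obj_id, conf
    return best_obj, best_conf

# Toy semantic map: each detected object carries a list of
# candidate labels with varying degrees of confidence.
semantic_map = {
    "obj1": {"tree": 0.9, "bush": 0.1},
    "obj2": {"fire hydrant": 0.8, "post": 0.2},
    "obj3": {"building": 0.95},
}

for noun in ["tree", "fire hydrant", "building"]:
    obj, conf = ground_noun(noun, semantic_map)
    print(noun, "->", obj, conf)
```

In the paper's setting the grounding is learned from human-provided examples rather than picked greedily, but the sketch shows the input/output shape of the problem: nouns in the command on one side, labeled physical objects on the other.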

Notes
Associated Project - RCTA

BibTeX

@conference{Oh-2016-103002,
author = {Abdeslam Boularias and Felix Duvallet and Jean Hyaejin Oh and Anthony Stentz},
title = {Learning Qualitative Spatial Relations for Robotic Navigation},
booktitle = {Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI '16)},
year = {2016},
month = {July},
pages = {4130--4134},
keywords = {semantic navigation, language grounding, imitation learning},
}