Natural Language Based Multimodal Interface for UAV Mission Planning

Meghan Chandarana, Erica L. Meszaros, Anna Trujillo, and B. Danette Allen
Conference Paper, Proceedings of the 61st Human Factors and Ergonomics Society Annual Meeting (HFES '17), No. 1, pp. 68 - 72, September 2017

Abstract

As the number of viable applications for unmanned aerial vehicle (UAV) systems increases at an exponential rate, interfaces that reduce the reliance on highly skilled engineers and pilots must be developed. Recent work aims to make use of common human communication modalities such as speech and gesture. This paper explores a multimodal natural language interface that uses a combination of speech and gesture input modalities to build complex UAV flight paths by defining trajectory segment primitives. Gesture inputs are used to define the general shape of a segment, while speech inputs provide additional geometric information needed to fully characterize a trajectory segment. A user study is conducted to evaluate the efficacy of the multimodal interface.
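
The paper does not include an implementation, but the pipeline described in the abstract, in which a gesture-classified segment shape is combined with speech-supplied geometric parameters to form a trajectory segment, can be illustrated with a minimal sketch. The segment shape names, the parameters length_m and altitude_m, and the build_flight_path helper below are hypothetical placeholders chosen for illustration; they are assumptions and are not taken from the interface described in the paper.

# Illustrative sketch (not from the paper): pairing a gesture-classified
# trajectory segment primitive with speech-derived geometric parameters.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TrajectorySegment:
    """One flight-path segment built from a gesture and its spoken parameters."""
    shape: str        # general shape from gesture recognition, e.g. "line" or "arc" (assumed labels)
    length_m: float   # segment length in meters, supplied via speech (assumed parameter)
    altitude_m: float # flight altitude in meters, supplied via speech (assumed parameter)

def build_flight_path(gesture_shapes: List[str],
                      speech_parameters: List[Dict[str, float]]) -> List[TrajectorySegment]:
    """Pair each recognized gesture with the speech input that characterizes it."""
    path = []
    for shape, params in zip(gesture_shapes, speech_parameters):
        path.append(TrajectorySegment(shape=shape,
                                      length_m=params.get("length_m", 10.0),
                                      altitude_m=params.get("altitude_m", 20.0)))
    return path

# Example: a straight leg followed by an arc, each defined multimodally.
if __name__ == "__main__":
    gestures = ["line", "arc"]
    speech = [{"length_m": 50.0, "altitude_m": 30.0},
              {"length_m": 25.0, "altitude_m": 30.0}]
    for segment in build_flight_path(gestures, speech):
        print(segment)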

BibTeX

@conference{Chandarana-2017-107682,
author = {Meghan Chandarana and Erica L. Meszaros and Anna Trujillo and B. Danette Allen},
title = {Natural Language Based Multimodal Interface for UAV Mission Planning},
booktitle = {Proceedings of the 61st Human Factors and Ergonomics Society Annual Meeting (HFES '17)},
year = {2017},
month = {September},
pages = {68 - 72},
}