An Architecture for Online Semantic Labeling on UGVs

Arne Suppe, Luis Ernesto Navarro-Serment, Daniel Munoz, J. Andrew (Drew) Bagnell, and Martial Hebert
Proc. SPIE 8741, Unmanned Systems Technology XV, April, 2013.



Abstract
We describe an architecture to provide online semantic labeling capabilities to field robots operating in urban environments. At the core of our system is the stacked hierarchical classifier developed by Munoz et al. [1], which classifies regions in monocular color images using models derived from hand-labeled training data. The classifier is trained to identify buildings, several kinds of hard surfaces, grass, trees, and sky. When taking this algorithm into the real world, practical concerns with difficult and varying lighting conditions require careful control of the imaging process. First, camera exposure is controlled in software, using statistics computed over all of the image's pixels, to compensate for the simplistic, poorly performing algorithm built into the camera. Second, by merging multiple images taken with different exposure times, we synthesize images with higher dynamic range than the ones produced by the sensor itself. The sensor's limited dynamic range makes it difficult to properly expose areas in shadow and, at the same time, high-albedo surfaces directly illuminated by the sun. Texture is a key feature used by the classifier, and under- or overexposed regions lacking texture are a leading cause of misclassifications. The results of the classifier are shared with higher-level elements operating on the UGV in order to perform tasks such as building identification from a distance and finding traversable surfaces.
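
As a rough illustration of the two imaging steps described above, the Python sketch below pairs a simple proportional auto-exposure update driven by whole-image statistics with exposure fusion of a bracketed sequence using OpenCV's Mertens merge. This is a minimal sketch, not the authors' implementation: the target level, the clamping bounds, and the names update_exposure and fuse_brackets are assumptions introduced here; only cv2.createMergeMertens is a real OpenCV call.

import cv2
import numpy as np

TARGET_MEAN = 118.0  # assumed mid-gray target for 8-bit pixels (not from the paper)

def update_exposure(frame_gray: np.ndarray, exposure_us: float) -> float:
    """Proportional exposure correction computed from all image pixels."""
    mean = float(frame_gray.mean())
    # Scale exposure toward the target brightness; clamp the step so the
    # control loop does not oscillate under rapidly changing illumination.
    gain = float(np.clip(TARGET_MEAN / max(mean, 1.0), 0.5, 2.0))
    return exposure_us * gain

def fuse_brackets(frames: list) -> np.ndarray:
    """Merge differently exposed 8-bit BGR frames into one well-exposed image.

    Mertens exposure fusion needs no camera response curve or exposure
    times, which makes it convenient for online use on a vehicle.
    """
    fused = cv2.createMergeMertens().process(frames)  # float32, roughly in [0, 1]
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

With a scheme along these lines, a shadowed wall and a sunlit facade captured in the same bracketed set can both retain the texture the classifier depends on.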

Keywords
Semantic labeling, scene understanding, unmanned vehicles, computer vision

Notes
Sponsor: U.S. Army Research Laboratory
Associated Center(s) / Consortia: Vision and Autonomous Systems Center and Field Robotics Center
Associated Lab(s) / Group(s): Vision and Mobile Robotics Lab and NavLab
Associated Project(s): CTA Robotics

Text Reference
Arne Suppe, Luis Ernesto Navarro-Serment, Daniel Munoz, J. Andrew (Drew) Bagnell, and Martial Hebert, "An Architecture for Online Semantic Labeling on UGVs," Proc. SPIE 8741, Unmanned Systems Technology XV, April, 2013.

BibTeX Reference
@inproceedings{Suppe_2013_7436,
   author = "Arne Suppe and Luis Ernesto Navarro-Serment and Daniel Munoz and J. Andrew (Drew) Bagnell and Martial Hebert",
   editor = "Robert E. Karlsen and Douglas W. Gage and Charles M. Shoemaker and Grant R. Gerhart",
   title = "An Architecture for Online Semantic Labeling on UGVs",
   booktitle = "Proc. SPIE 8741, Unmanned Systems Technology XV",
   month = "April",
   year = "2013",
}