Cutting through the Clutter: Task-Relevant Features for Image Matching

Conference Paper, Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '16), March 2016

Abstract

Where do we focus our attention in an image? Humans have an amazing ability to cut through the clutter to the parts of an image most relevant to the task at hand. Consider the task of geo-localizing tourist photos by retrieving other images taken at the same location. Such photos naturally contain friends and family, and may even be nearly filled by a person's face in the case of a selfie. Humans have no trouble ignoring these 'distractions' and recognizing the parts that are indicative of location (e.g., the towers of Neuschwanstein Castle rather than a friend's face, a tree, or a car). In this paper, we investigate learning this ability automatically. At training time, we learn how informative a region is for localization. At test time, we use this learned model to determine which parts of a query image to use for retrieval. We introduce a new dataset, People at Landmarks, that contains large amounts of clutter in query images. Our system outperforms the existing state-of-the-art retrieval approach by more than 10% mAP, and also improves results on a standard dataset without heavy occluders (Oxford5K).
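To make the test-time idea concrete, here is a minimal sketch (not from the paper) of how a learned region-relevance score might reweight a query image's local features before retrieval. The `relevance_model` interface, `predict_proba` call, and feature shapes are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def weighted_retrieval_scores(query_regions, query_feats, db_feats, relevance_model):
    """Rank database images for a query, down-weighting task-irrelevant regions.

    query_regions   : per-region descriptors fed to the relevance model
    query_feats     : (R, D) array of local features, one row per query region
    db_feats        : (N, D) array of aggregated database image features
    relevance_model : hypothetical classifier with a predict_proba() method
                      scoring how indicative a region is of location
    """
    # Score each query region's task relevance in [0, 1].
    weights = relevance_model.predict_proba(query_regions)[:, 1]  # shape (R,)

    # Aggregate local features into one query vector, weighted by relevance,
    # so faces, trees, and cars contribute little and landmark regions dominate.
    query_vec = (weights[:, None] * query_feats).sum(axis=0)
    query_vec /= np.linalg.norm(query_vec) + 1e-8

    # Cosine similarity against L2-normalized database image vectors;
    # higher scores indicate better candidate matches for geo-localization.
    db_norm = db_feats / (np.linalg.norm(db_feats, axis=1, keepdims=True) + 1e-8)
    return db_norm @ query_vec
```

Ranking the returned scores in descending order then yields the retrieval order; the key design point is that the relevance weighting happens on the query side, so the database index need not change.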

BibTeX

@conference{Girdhar-2016-109810,
author = {Rohit Girdhar and David Fouhey and Kris M. Kitani and Abhinav Gupta and Martial Hebert},
title = {Cutting through the Clutter: Task-Relevant Features for Image Matching},
booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '16)},
year = {2016},
month = {March},
}