Few-Shot Hash Learning for Image Retrieval - Robotics Institute Carnegie Mellon University

Few-Shot Hash Learning for Image Retrieval

Yuxiong Wang, Liangke Gui, and Martial Hebert
Workshop Paper, ICCV '17 Workshops, pp. 1228-1237, October 2017

Abstract

Current approaches to hash-based semantic image retrieval assume a set of pre-defined categories and rely on supervised learning from a large number of annotated samples. The need for labeled samples limits their applicability in scenarios in which a user provides, at query time, a small set of training images defining a customized novel category. This paper addresses the problem of few-shot hash learning, in the spirit of one-shot learning in image recognition and classification and of early work on locality-sensitive hashing. More precisely, our approach is based on the insight that universal hash functions can be learned offline from unlabeled data because of the information implicit in the density structure of a discriminative feature space. We can then select a task-specific combination of hash codes for a novel category from a few labeled samples. The resulting unsupervised generic hashing (UGH) significantly outperforms current supervised and unsupervised hashing approaches on image retrieval tasks with small training sets.
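The two-stage idea in the abstract (an offline bank of generic hash functions, then a task-specific selection from a few labeled samples) can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the "universal" hash functions are stood in for by random LSH-style hyperplanes, and the selection criterion (bits on which the few positives agree with each other but differ from unlabeled background data) is a simple hypothetical proxy for the paper's selection procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline stage (placeholder): a bank of generic hash functions over a
# feature space. Random hyperplanes stand in for the learned functions.
dim, n_hashes = 64, 256
hyperplanes = rng.standard_normal((n_hashes, dim))

def hash_codes(features):
    """Binary codes: sign of the projection onto each hyperplane."""
    return (features @ hyperplanes.T) > 0  # shape (n_samples, n_hashes)

# Query time: a few labeled positives for a novel category, plus
# unlabeled background data (all synthetic here).
positives = rng.standard_normal((5, dim)) + 2.0   # toy positive cluster
background = rng.standard_normal((500, dim))

pos_codes = hash_codes(positives)
bg_codes = hash_codes(background)

# Hypothetical selection score: prefer bits on which the positives agree
# among themselves (agreement) and differ from the background (contrast).
agreement = np.abs(pos_codes.mean(axis=0) - 0.5) * 2           # in [0, 1]
contrast = np.abs(pos_codes.mean(axis=0) - bg_codes.mean(axis=0))
selected = np.argsort(agreement * contrast)[-32:]              # top 32 bits

# Retrieval: rank database items by Hamming distance on the selected bits.
query_code = pos_codes[:, selected].mean(axis=0) > 0.5
dists = (bg_codes[:, selected] != query_code).sum(axis=1)
ranking = np.argsort(dists)
```

The point of the sketch is the division of labor: the expensive part (building the hash-function bank) needs no labels and happens once, while adapting to a novel category reduces to scoring and selecting a small subset of bits from the few provided samples.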

BibTeX

@workshop{Wang-2017-103523,
author = {Yuxiong Wang and Liangke Gui and Martial Hebert},
title = {Few-Shot Hash Learning for Image Retrieval},
booktitle = {Proceedings of ICCV '17 Workshops},
year = {2017},
month = {October},
pages = {1228--1237},
}