Few-Shot Hash Learning for Image Retrieval

Yuxiong Wang, Liangke Gui and Martial Hebert
Conference Paper, International Conference on Computer Vision (ICCV) Workshops, October 2017



Current approaches to hash-based semantic image retrieval assume a set of pre-defined categories and rely on supervised learning from a large number of annotated samples. The need for labeled samples limits their applicability in scenarios in which a user provides, at query time, a small set of training images defining a customized novel category. This paper addresses the problem of few-shot hash learning, in the spirit of one-shot learning in image recognition and classification and of early work on locality-sensitive hashing. More precisely, our approach is based on the insight that universal hash functions can be learned off-line from unlabeled data because of the information implicit in the density structure of a discriminative feature space. We can then select a task-specific combination of hash codes for a novel category from a few labeled samples. The resulting unsupervised generic hashing (UGH) significantly outperforms current supervised and unsupervised hashing approaches on image retrieval tasks with few training samples.
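To make the two-stage idea concrete, the sketch below illustrates the general pattern the abstract describes: an off-line pool of generic hash functions, followed by a query-time selection of bits tailored to a few labeled positives. This is *not* the authors' UGH algorithm; it is a minimal stand-in that uses random hyperplanes (classic locality-sensitive hashing) for the off-line pool and bit-agreement among the positives as the selection criterion. All function names and the agreement heuristic are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Off-line stage: a pool of generic hash functions. Here we use random
# hyperplanes over the feature space (an LSH-style stand-in for the
# hash functions learned from unlabeled data in the paper).
def make_hash_pool(dim, n_bits):
    return rng.standard_normal((n_bits, dim))

def hash_codes(W, X):
    # Sign of each projection gives one binary code bit;
    # result has shape (n_samples, n_bits).
    return (X @ W.T > 0).astype(np.uint8)

# Query-time stage: from a few labeled positive examples, keep the k
# bits on which the positives agree most. Bit-agreement is a crude,
# hypothetical proxy for selecting a task-specific combination of
# hash codes for the novel category.
def select_bits(codes_pos, k):
    means = codes_pos.mean(axis=0)
    agreement = np.abs(means - 0.5)  # 0.5 = maximally inconsistent bit
    return np.argsort(-agreement)[:k]

def retrieve(bits, query_code, db_codes, top=5):
    # Rank the database by Hamming distance restricted to the
    # selected task-specific bits.
    d = (db_codes[:, bits] != query_code[bits]).sum(axis=1)
    return np.argsort(d)[:top]
```

In this toy setting the expensive step (building the hash pool and coding the database) happens once off-line; only the cheap bit selection depends on the user's few-shot examples, which is what makes query-time customization feasible.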

BibTeX Reference
@inproceedings{wang2017fewshot,
  author    = {Yuxiong Wang and Liangke Gui and Martial Hebert},
  title     = {Few-Shot Hash Learning for Image Retrieval},
  booktitle = {International Conference on Computer Vision (ICCV) Workshops},
  year      = {2017},
  month     = {October},
}