Unbiased Look at Dataset Bias

Antonio Torralba and Alexei A. Efros
Conference Paper, Proceedings of (CVPR) Computer Vision and Pattern Recognition, pp. 1521 - 1528, June, 2011

Abstract

Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as a source of large amounts of training data, but also as a means of measuring and comparing performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets, that started out as data capture efforts aimed at representing the visual world, have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. But more broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected issue.
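The cross-dataset generalization criterion mentioned above amounts to training on one dataset and testing on all the others, yielding a train-by-test accuracy matrix whose diagonal (train and test on the same dataset) is typically higher than its off-diagonal entries. The sketch below is a toy illustration of that protocol, not the paper's actual experiment: it uses synthetic 2-D data with a per-dataset mean shift standing in for capture bias, and a simple nearest-centroid classifier in place of the detectors and classifiers evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(shift, n=200):
    # Two Gaussian classes in 2-D; `shift` mimics a dataset-specific
    # capture bias (each "dataset" samples the world slightly differently).
    X0 = rng.normal(loc=[0.0 + shift, 0.0], scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=[3.0 + shift, 3.0], scale=1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

def train_centroids(X, y):
    # A minimal stand-in classifier: one centroid per class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    # Classify each point by its nearest class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.argmin(axis=1) == y).mean())

# Three hypothetical "datasets" with increasing bias relative to A.
datasets = {name: make_dataset(s) for name, s in
            [("A", 0.0), ("B", 1.5), ("C", 3.0)]}
names = list(datasets)

# Cross-dataset matrix: rows = training set, columns = test set.
matrix = np.zeros((len(names), len(names)))
for i, tr in enumerate(names):
    centroids = train_centroids(*datasets[tr])
    for j, te in enumerate(names):
        matrix[i, j] = accuracy(centroids, *datasets[te])
```

With this setup the diagonal entries come out higher than the off-diagonal ones, which is the qualitative "performance drop" pattern the paper measures on real datasets.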

BibTeX

@conference{Torralba-2011-7313,
author = {Antonio Torralba and Alexei A. Efros},
title = {Unbiased Look at Dataset Bias},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2011},
month = {June},
pages = {1521--1528},
}