Incorporating Background Invariance into Feature-Based Object Recognition

Andrew Stein and Martial Hebert
Conference Paper, Seventh IEEE Workshop on Applications of Computer Vision (WACV), January 2005


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.


Current feature-based object recognition methods use information derived from local image patches. For robustness, features are engineered for invariance to various transformations, such as rotation, scaling, or affine warping. When patches overlap object boundaries, however, errors in both detection and matching will almost certainly occur due to inclusion of unwanted background pixels. This is common in real images, which often contain significant background clutter, objects which are not heavily textured, or objects which occupy a relatively small portion of the image. We suggest improvements to the popular Scale Invariant Feature Transform (SIFT) which incorporate local object boundary information. The resulting feature detection and descriptor creation processes are invariant to changes in background. We call this method the Background and Scale Invariant Feature Transform (BSIFT). We demonstrate BSIFT’s superior performance in feature detection and matching on synthetic and natural images.

BibTeX Reference
@inproceedings{stein2005bsift,
  author    = {Andrew Stein and Martial Hebert},
  title     = {Incorporating Background Invariance into Feature-Based Object Recognition},
  booktitle = {Seventh IEEE Workshop on Applications of Computer Vision (WACV)},
  year      = {2005},
  month     = {January},
  keywords  = {object recognition, features, SIFT, BSIFT},
}