Improving Object Detection with Inverted Attention

Zeyi Huang, Wei Ke, and Dong Huang
Conference Paper, Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '20), pp. 1294-1302, March 2020

Abstract

Improving object detectors against occlusion, blur and noise is a critical step toward deploying detectors in real applications. Since it is not possible to exhaust all image defects and occlusions through data collection, many researchers seek to generate occluded samples. The generated hard samples are either images or feature maps with coarse patches dropped out in the spatial dimensions. Significant overhead is required to generate hard samples and/or to estimate drop-out patches using extra network branches. In this paper, we improve object detectors using a highly efficient and fine-grained mechanism called Inverted Attention (IA). Unlike the original detector network, which focuses only on the dominant parts of objects, the detector network with IA iteratively inverts attention on feature maps, which pushes the detector to discover new discriminative clues and to put more attention on complementary object parts, feature channels and even context. Our approach (1) discovers features along both the spatial and channel dimensions of the feature maps; (2) requires no extra training on hard samples, no extra network parameters for attention estimation, and no testing overhead. Experiments show that our approach consistently improves state-of-the-art detectors on benchmark databases.
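
The snippet below is a minimal sketch of how such a gradient-guided inverted-attention step could look during training, assuming a PyTorch setup. The feature extractor, classification head, tensor shapes and keep-ratio threshold are illustrative placeholders, not the paper's exact implementation: attention is approximated by gradient magnitudes on an RoI feature map, the most-attended elements are suppressed, and the masked features are fed back to the head so training is pushed toward complementary parts and channels.

import torch
import torch.nn as nn
import torch.nn.functional as F

def inverted_attention_mask(features, head, labels, keep_ratio=0.5):
    """Build an inverted attention mask from gradients of the loss
    w.r.t. the feature map: elements with the highest attention are
    zeroed out, keeping only the lower-attention fraction (keep_ratio)."""
    features = features.detach().requires_grad_(True)
    loss = F.cross_entropy(head(features), labels)
    grads, = torch.autograd.grad(loss, features)        # same shape as features
    attention = grads.abs()                              # element-wise attention proxy

    # Invert the attention: keep the lowest-attention elements per sample.
    flat = attention.flatten(1)
    k = int(flat.size(1) * keep_ratio)
    thresh = flat.kthvalue(k, dim=1).values.view(-1, 1, 1, 1)
    mask = (attention <= thresh).float()                 # 1 = keep, 0 = suppress
    return mask

# Toy usage: a 2-class head over 256x7x7 RoI features (shapes are assumptions).
head = nn.Sequential(nn.Flatten(), nn.Linear(256 * 7 * 7, 2))
feats = torch.randn(4, 256, 7, 7)
labels = torch.randint(0, 2, (4,))
mask = inverted_attention_mask(feats, head, labels)
masked_feats = feats * mask  # masked features used for the training iteration

Because the mask is computed from gradients already available during training, this kind of step adds no learnable parameters and is skipped entirely at test time, which matches the efficiency claims in the abstract.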

BibTeX

@conference{Huang-2020-117796,
author = {Zeyi Huang and Wei Ke and Dong Huang},
title = {Improving Object Detection with Inverted Attention},
booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '20)},
year = {2020},
month = {March},
pages = {1294--1302},
}