Rotational Rectification Network: Enabling Pedestrian Detection for Mobile Vision

Conference Paper, Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV '18), pp. 1084-1092, March 2018

Abstract

Across a majority of pedestrian detection datasets, it is typically assumed that pedestrians will be standing upright with respect to the image coordinate system. This assumption, however, is not always valid for many vision-equipped mobile platforms such as mobile phones, UAVs, or construction vehicles on rugged terrain. In these situations, the motion of the camera can cause images of pedestrians to be captured at extreme angles, which leads to very poor performance from standard pedestrian detectors. To address this issue, we propose a Rotational Rectification Network (R2N) that can be inserted into any CNN-based pedestrian (or object) detector to adapt it to significant changes in camera rotation. The rotational rectification network uses a 2D rotation estimation module that passes rotational information to a spatial transformer network to undistort image features. To enable robust rotation estimation, we propose a Global Polar Pooling (GP-Pooling) operator to capture rotational shifts in convolutional features. Through our experiments, we show how our rotational rectification network can improve the performance of a state-of-the-art pedestrian detector under heavy image rotation by up to 45%.
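
Illustrative Sketch

The abstract describes a pipeline with two stages: a 2D rotation estimation module built on the GP-Pooling operator, and a spatial transformer that undistorts features before a standard detector runs. The PyTorch sketch below is not the authors' implementation; it only illustrates those two stages under simplifying assumptions (a single estimated roll angle, GP-Pooling approximated as max-pooling over angular bins around the feature-map center, and rectification applied to the input image rather than to intermediate detector features). The names GlobalPolarPooling, RotationEstimator, and rectify are hypothetical.

import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalPolarPooling(nn.Module):
    # Simplified stand-in for GP-Pooling: max-pool feature responses over
    # angular bins around the feature-map center, so an in-plane rotation of
    # the input shifts the pooled vector instead of scrambling it.
    def __init__(self, num_angle_bins=36):
        super().__init__()
        self.num_angle_bins = num_angle_bins

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(h, 1).expand(h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, w).expand(h, w)
        angle = torch.atan2(ys, xs)                          # polar angle in (-pi, pi]
        bins = ((angle + math.pi) / (2 * math.pi) * self.num_angle_bins).long()
        bins = bins.clamp(max=self.num_angle_bins - 1).view(-1)
        flat = x.view(b, c, -1)                              # (B, C, H*W)
        pooled = x.new_zeros(b, c, self.num_angle_bins)
        for k in range(self.num_angle_bins):                 # max over each angular bin
            mask = bins == k
            if mask.any():
                pooled[:, :, k] = flat[:, :, mask].amax(dim=-1)
        return pooled                                        # (B, C, num_angle_bins)


class RotationEstimator(nn.Module):
    # Toy 2D rotation regressor: conv features -> GP-Pooling -> roll angle.
    def __init__(self, in_channels=3, feat_channels=32, num_angle_bins=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.gp_pool = GlobalPolarPooling(num_angle_bins)
        self.regressor = nn.Linear(feat_channels * num_angle_bins, 1)

    def forward(self, img):
        pooled = self.gp_pool(self.features(img))
        return self.regressor(pooled.flatten(1)).squeeze(-1)  # (B,) angle in radians


def rectify(img, angle):
    # Spatial-transformer-style rectification: build an affine grid for the
    # inverse rotation and resample, so pedestrians appear upright again.
    cos, sin = torch.cos(-angle), torch.sin(-angle)
    zero = torch.zeros_like(cos)
    theta = torch.stack([
        torch.stack([cos, -sin, zero], dim=-1),
        torch.stack([sin,  cos, zero], dim=-1),
    ], dim=1)                                                # (B, 2, 3)
    grid = F.affine_grid(theta, list(img.shape), align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)


if __name__ == "__main__":
    img = torch.randn(2, 3, 128, 128)                        # rotated input frames
    upright = rectify(img, RotationEstimator()(img))         # rectified frames
    print(upright.shape)                                     # feed to any CNN detector

In the paper, GP-Pooling operates on convolutional feature maps and the spatial transformer undistorts image features inside the detector; the sketch rectifies the raw image only to keep the example self-contained and detector-agnostic.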

BibTeX

@conference{Weng-2018-102678,
author = {Xinshuo Weng and Shangxuan Wu and Fares Beainy and Kris M. Kitani},
title = {Rotational Rectification Network: Enabling Pedestrian Detection for Mobile Vision},
booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '18)},
year = {2018},
month = {March},
pages = {1084--1092},
}