Rotational Rectification Network for Robust Pedestrian Detection

Abstract

Pedestrian detection performance has steadily improved on a variety of benchmark datasets, such as Caltech, KITTI, INRIA, and ETH, since the resurgence of deep neural networks. A majority of pedestrian datasets assume that pedestrians stand upright with respect to the image coordinate system. This assumption, however, does not always hold for many vision-equipped mobile platforms, such as mobile phones, UAVs, or construction vehicles on rugged terrain. In these situations, the motion of the camera can cause pedestrians to be imaged at extreme angles, which severely degrades the performance of standard pedestrian detectors. To address this issue, we propose a Rotational Rectification Network (R2N) that can be inserted into any CNN-based pedestrian (or object) detector to maintain performance despite significant rotational variance. The rotational rectification network uses a rotation estimation module that passes rotational information to a spatial transformer network \cite{Jaderberg2015}, which undistorts image features before passing them downstream. To enable robust rotation estimation, we propose a Global Polar Pooling (GP-Pooling) operator that captures rotational shifts in convolutional features. Through our experiments, we show how our rotational rectification network can be used to enhance the performance of a state-of-the-art pedestrian detector under heavy image rotation.
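The rectification step described above can be illustrated with a minimal sketch: given an estimated in-plane rotation angle, the image (or feature map) is inverse-warped about its center so that downstream detector layers see an upright view. The function name `rectify` and the nearest-neighbor sampling are assumptions for illustration only; the paper performs this warp with a spatial transformer network rather than the explicit loop-free numpy resampling shown here.

```python
import numpy as np

def rectify(image, theta):
    """Undo an estimated in-plane rotation `theta` (radians) about the
    image center via inverse warping with nearest-neighbor sampling.

    Illustrative stand-in for the spatial-transformer warp used in R2N;
    the rotation angle would come from the rotation estimation module.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse map: for each output pixel, find the source location that
    # the estimated rotation moved it from.
    xr = cos_t * (xs - cx) - sin_t * (ys - cy) + cx
    yr = sin_t * (xs - cx) + cos_t * (ys - cy) + cy
    xi = np.round(xr).astype(int)
    yi = np.round(yr).astype(int)
    out = np.zeros_like(image)
    valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out[ys[valid], xs[valid]] = image[yi[valid], xi[valid]]
    return out
```

For a 90-degree rotation on an odd-sized grid the sampling is exact, so `rectify(np.rot90(img), -np.pi / 2)` recovers `img`; for arbitrary angles the nearest-neighbor choice trades accuracy for simplicity, whereas a spatial transformer uses differentiable bilinear sampling.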
