Algorithms based on Convolutional Neural Networks (CNNs) have recently achieved significant milestones in object recognition. Standard datasets, which contain many examples of each object class, support training for inter-class variability well. However, gathering sufficient data to train for a particular instance of an object within a class is impractical, and quantitatively assessing the imaging conditions of every image in a given dataset is not feasible. By generating sufficient images with known imaging conditions, we study to what extent CNNs can cope with hard imaging conditions for instance-level recognition in an active learning regime. Leveraging powerful rendering techniques to achieve instance-level detection, we present results of training three state-of-the-art object detection algorithms, namely Fast R-CNN, Faster R-CNN, and YOLO9000, under hard imaging conditions imposed on the scene by rendering. Our extensive experiments yield a mean Average Precision (mAP) of 0.92 on synthetic images and 0.83 on real images with the best-performing Faster R-CNN. We show for the first time how well detection algorithms based on deep architectures fare under each of the hard imaging conditions studied.
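As a rough illustration of the kind of training setup the abstract describes (fine-tuning a pretrained Faster R-CNN on rendered images of specific object instances), the sketch below uses the torchvision detection API. It is not the authors' pipeline; the class count, hyperparameters, and data-loading interface are assumptions chosen only to make the example self-contained.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# fine-tune a COCO-pretrained Faster R-CNN so its box head predicts
# instance-level classes from synthetically rendered training images.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 11  # hypothetical: 10 object instances + background

# Replace the COCO classification head with one sized for our instances.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_epoch(model, loader, device="cuda"):
    """One pass over the rendered training set.

    `loader` is assumed to yield (images, targets) where targets is a list
    of dicts with "boxes" (N x 4 tensors) and "labels" (N tensors).
    """
    model.to(device).train()
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # RPN + box-head losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Evaluation in this setting would then compute mean Average Precision over the held-out synthetic and real test images, grouped by the rendered imaging condition of each image.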