Webly Supervised Learning of Convolutional Networks

Abstract

In the last few years, we have made enormous progress in learning visual representations via convolutional neural networks (CNNs). We believe CNNs get their edge from their ability to ingest large amounts of data. As we move forward, a key question therefore arises: how do we move from million-image datasets to billion-image counterparts? Do we continue to manually label images in the hope of scaling labeling to a billion images? It is in this context that webly supervised learning assumes great importance: if we can exploit images on the web to train CNNs without manually labeling them, it is a win for everyone. We present a simple yet powerful approach to exploiting web data for learning CNNs. Specifically, inspired by curriculum learning, we present a two-stage approach. First, we use simple, easy images to train an initial visual representation. We then adapt this initial CNN to harder, Flickr-style scene images by exploiting the structure of the data and categories (via a relationship graph). We demonstrate that our two-stage CNN performs competitively with the ImageNet-pretrained network architecture for object detection without using a single ImageNet training label. We further demonstrate the strength of webly supervised learning by localizing objects in web images and training an R-CNN-style detector. To the best of our knowledge, we show the best performance on PASCAL VOC 2007 among methods that use no VOC training data.
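The core idea of the two-stage, curriculum-inspired training can be illustrated with a minimal sketch. The toy model below is an assumption for illustration only: it stands in a logistic-regression "network" for a CNN, trains it first on well-separated "easy" data (analogous to clean search-engine images), then warm-starts from those weights to adapt to overlapping "hard" data (analogous to cluttered Flickr-style scenes). The function name `train_stage` and the synthetic data are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_stage(X, y, w=None, lr=0.5, epochs=200):
    """Gradient-descent training; passing `w` lets stage 2
    warm-start from the stage-1 weights (the curriculum step)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

# "Easy" data: well-separated clusters (stand-in for clean web images).
X_easy = np.vstack([rng.normal(-2, 0.5, (100, 2)),
                    rng.normal(2, 0.5, (100, 2))])
y_easy = np.array([0] * 100 + [1] * 100)

# "Hard" data: overlapping clusters (stand-in for cluttered scenes).
X_hard = np.vstack([rng.normal(-0.5, 1.0, (100, 2)),
                    rng.normal(0.5, 1.0, (100, 2))])
y_hard = np.array([0] * 100 + [1] * 100)

# Stage 1: learn an initial representation from easy examples.
w_easy = train_stage(X_easy, y_easy)

# Stage 2: adapt the stage-1 weights to the harder distribution.
w_final = train_stage(X_hard, y_hard, w=w_easy.copy())

acc = np.mean((sigmoid(X_hard @ w_final) > 0.5) == y_hard)
```

The key design choice mirrored here is that stage 2 does not start from scratch: it initializes from the representation learned on easy data, which is what makes the easy-to-hard ordering a curriculum rather than two independent training runs.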
