Do semantic parts emerge in Convolutional Neural Networks?
Semantic object parts can be useful for several visual recognition tasks. Lately, these tasks have been addressed using Convolutional Neural Networks (CNNs), achieving outstanding results. In this work we study whether CNNs learn semantic parts in their internal representation. We investigate the responses of convolutional filters and try to associate their stimuli with semantic parts. While previous efforts [1,2,3,4] studied this matter by visual inspection, we perform an extensive quantitative analysis based on ground-truth part bounding-boxes, exploring different layers, network depths, and supervision levels. Even after assisting the filters with several mechanisms that favor this association, we find that only about 25 percent of the semantic parts in the PASCAL-Part dataset [5] emerge in the popular AlexNet [6] network finetuned for object detection [7]. Interestingly, neither the supervision level nor the network depth seems to significantly affect the emergence of parts. Finally, we investigate whether filters respond to recurrent discriminative patches rather than semantic parts. We discover that the discriminative power of the network can be attributed to a few discriminative filters specialized to each object class. Moreover, about 60 percent of them can be associated with semantic parts. This overlap between discriminative and semantic filters might explain why previous studies, based on visual inspection only, suggested a stronger emergence of semantic parts.
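The core measurement sketched in the abstract is an overlap test between the image regions that excite a filter and ground-truth part bounding-boxes. A minimal illustrative sketch of such a test is below; the function names, the 0.4 IoU threshold, and the majority-vote rule are assumptions for illustration, not the paper's exact protocol.

```python
# Illustrative sketch (not the paper's exact protocol): decide whether a
# convolutional filter can be associated with a semantic part by checking
# how often the filter's top activation regions overlap the part's
# ground-truth bounding-boxes. Boxes are (x1, y1, x2, y2) tuples.
# The 0.4 IoU threshold and the majority rule are assumed values.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def filter_matches_part(activation_boxes, part_boxes, iou_thresh=0.4):
    """Associate a filter with a part if a majority of its top
    activation regions overlap some ground-truth part box."""
    hits = sum(
        any(iou(a, p) >= iou_thresh for p in part_boxes)
        for a in activation_boxes
    )
    return hits / len(activation_boxes) >= 0.5
```

Under this kind of criterion, the fraction of PASCAL-Part classes for which at least one filter passes the test gives the "percentage of parts that emerge" reported in the abstract.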