
Automatic Image Annotation via Label Transfer in the Semantic Space

Abstract

While most automatic image annotation methods rely solely on visual features, we integrate additional information into a unified embedding comprising visual and textual information. We propose an approach based on Kernel Canonical Correlation Analysis (KCCA), which builds a latent semantic space in which the correlation between visual and textual features is well preserved. Images in this semantic space have a reduced semantic gap and are therefore likely to yield better annotation performance. The proposed approach is robust: it works both when the training set is well annotated by experts and when it is noisy, as in the case of user-generated tags in social media. We evaluate our framework on four popular datasets. Our results show that our KCCA-based approach can be combined with several state-of-the-art label transfer methods to obtain significant improvements. In particular, nearest-neighbor label transfer methods benefit the most, enabling our approach to scale to labels never seen at training time. Our approach works even with the noisy tags of social users, provided that appropriate denoising is performed. Experiments in a large-scale setting show that our method provides benefits even when the semantic space is estimated on a subset of the training images.

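The abstract describes projecting visual and textual features into a shared latent space learned with (kernel) CCA, then annotating a new image by transferring the tags of its nearest training neighbors in that space. As a rough illustration only, the sketch below uses plain regularized linear CCA (a simplification of the KCCA the paper actually uses; kernelizing would replace the feature matrices with centered kernel matrices) together with a nearest-neighbor label transfer step. All function and variable names here are hypothetical, not taken from the paper.

```python
import numpy as np

def fit_cca(X, Y, n_components=2, reg=1e-3):
    """Regularized linear CCA via an SVD of the whitened cross-covariance.

    X: (n, dx) visual features; Y: (n, dy) textual features.
    Returns projection matrices A (for X) and B (for Y) and the
    leading canonical correlations.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, :n_components], Wy @ Vt.T[:, :n_components], s[:n_components]

def transfer_labels(A, X_train, train_tags, x_query, k=3):
    """Project a query image into the semantic space and vote over
    the tags of its k nearest training neighbors."""
    mu = X_train.mean(axis=0)
    Z_train = (X_train - mu) @ A
    z_q = (x_query - mu) @ A
    d = np.linalg.norm(Z_train - z_q, axis=1)
    votes = {}
    for i in np.argsort(d)[:k]:
        for t in train_tags[i]:
            votes[t] = votes.get(t, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)
```

Because the transfer step is purely nearest-neighbor based, any tag attached to a training image can be transferred, including tags that were not part of the vocabulary when the projection was fitted, which is consistent with the abstract's claim about scaling to labels unseen at training time.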