Guided Open Vocabulary Image Captioning with Constrained Beam Search
Existing image captioning models do not generalize well to out-of-domain images containing novel scenes or objects. This limitation severely hinders the use of these models in real-world applications dealing with images in the wild. We address this problem with a flexible approach that enables existing deep captioning architectures to take advantage of image taggers at test time, without re-training. Our method uses constrained beam search to force the inclusion of selected tag words in the output, and fixed, pretrained word embeddings to facilitate vocabulary expansion to previously unseen tag words. Using this approach, we achieve state-of-the-art results for out-of-domain captioning on MS COCO (and improved results for in-domain captioning). To demonstrate the scalability of our approach, we generate and publicly release captions for the complete ImageNet classification dataset containing 1.2M images. Each ImageNet caption includes the ground-truth image label. Human evaluations indicate that 27% of the resulting captions are likely to meet or exceed human quality (increasing to 38% for certain categories, such as birds).
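To give a sense of the constrained-decoding idea, the sketch below is a minimal, self-contained illustration, not the authors' implementation: hypotheses are grouped by which constraint (tag) words they have already emitted, a separate beam is kept per constraint state, and a caption may only terminate once every selected tag word appears in it. The `toy_step` scorer is a hypothetical stand-in for a real captioning decoder.

```python
import math

def constrained_beam_search(step_logprobs, constraints, beam_size=3,
                            max_len=12, eos="</s>"):
    """step_logprobs(prefix) -> {token: log-probability} (stand-in for a decoder).
    `constraints` is a set of tag words that must appear in the output."""
    constraints = frozenset(constraints)
    # One beam per "constraint state": the subset of constraints satisfied so far.
    beams = {frozenset(): [([], 0.0)]}   # state -> list of (tokens, score)
    finished = []
    for _ in range(max_len):
        new_beams = {}
        for state, hyps in beams.items():
            for tokens, score in hyps:
                for tok, lp in step_logprobs(tokens).items():
                    if tok == eos:
                        # Only allow termination once every constraint word is present.
                        if state == constraints:
                            finished.append((tokens, score + lp))
                        continue
                    new_state = state | ({tok} & constraints)
                    new_beams.setdefault(new_state, []).append(
                        (tokens + [tok], score + lp))
        # Keep the top `beam_size` hypotheses within each constraint state.
        beams = {s: sorted(h, key=lambda x: -x[1])[:beam_size]
                 for s, h in new_beams.items()}
    return max(finished, key=lambda x: x[1]) if finished else None

if __name__ == "__main__":
    # Toy "decoder": a uniform distribution over a tiny vocabulary.
    vocab = ["a", "zebra", "standing", "in", "field", "</s>"]
    def toy_step(prefix):
        return {tok: math.log(1.0 / len(vocab)) for tok in vocab}

    result = constrained_beam_search(toy_step, {"zebra", "field"})
    print(" ".join(result[0]), result[1])
```

With a real decoder, the per-state beams would hold partial captions scored by the captioning model, and the tag words supplied by an image tagger would form the constraint set; this sketch omits the paper's handling of multi-word and disjunctive constraints.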