Bootstrap, Review, Decode: Using Out-of-Domain Textual Data to Improve Image Captioning

Abstract

We propose a novel way of using out-of-domain textual data to enhance the performance of existing image captioning systems. We evaluate this learning approach on a newly designed model that uses, and improves upon, building blocks from state-of-the-art methods. The model first detects the visual concepts present in an image and feeds them to a reviewer-decoder architecture with an attention mechanism. Unlike previous approaches that encode visual concepts using word embeddings, we instead use regional image features, which capture more intrinsic information. The main benefit of this architecture is that it synthesizes meaningful thought vectors capturing salient image properties and then applies a soft attentive decoder to translate those thought vectors into image captions. We evaluate our model on both the Microsoft COCO and Flickr30K datasets and show that, combined with our bootstrap learning method, it substantially improves performance and generates more accurate and diverse captions.
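The reviewer-decoder flow described above can be sketched in miniature: a reviewer attends over regional image features for a fixed number of steps, emitting one "thought vector" per step, and a decoder then applies soft attention over those thought vectors at each generation step. This is a toy NumPy sketch with random, untrained weights; the function names, dimensions, and state-update rule are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def reviewer(regions, W, num_steps):
    """Run attentive review steps over regional image features,
    producing one thought vector per step (assumed update rule)."""
    d = regions.shape[1]
    h = np.zeros(d)                                 # reviewer state
    thoughts = []
    for _ in range(num_steps):
        att = softmax(regions @ h)                  # attention over regions
        ctx = att @ regions                         # attended context
        h = np.tanh(W @ np.concatenate([h, ctx]))   # update state
        thoughts.append(h)
    return np.stack(thoughts)

def decoder_step(thoughts, h_dec, U):
    """One soft-attention decode step over the thought vectors."""
    att = softmax(thoughts @ h_dec)                 # attention over thoughts
    ctx = att @ thoughts                            # attended thought context
    return np.tanh(U @ np.concatenate([h_dec, ctx]))

# Toy dimensions: 5 image regions, 8-d features, 3 review steps.
d, n_regions, n_thoughts = 8, 5, 3
regions = rng.normal(size=(n_regions, d))   # stand-in for CNN region features
W = rng.normal(size=(d, 2 * d)) * 0.1
U = rng.normal(size=(d, 2 * d)) * 0.1

thoughts = reviewer(regions, W, n_thoughts)
h = decoder_step(thoughts, np.zeros(d), U)
print(thoughts.shape, h.shape)
```

In a full system, the decoder state would feed a softmax over the vocabulary at each word, and the regional features would come from an object/concept detector rather than random noise.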
