Image Caption Generation with Text-Conditional Semantic Attention
We propose a semantic attention mechanism for image caption generation, called text-conditional semantic attention, which allows the caption generator to automatically learn which parts of the image feature to focus on given the previously generated text. To acquire text-related image features for our attention model, we also improve the guiding Long Short-Term Memory (gLSTM) structure by back-propagating the training loss through the semantic guidance to fine-tune the CNN weights. In contrast to existing gLSTM methods, such as emb-gLSTM, our fine-tuned model makes the guidance information more text-related. This also allows joint learning of the image embedding, text embedding, semantic attention, and language model within one simple network architecture in an end-to-end manner. We implement our model based on NeuralTalk2, an open-source image caption generator, and test it on the MSCOCO dataset. We evaluate the proposed method with three metrics: BLEU, METEOR, and CIDEr. The proposed methods outperform state-of-the-art methods.
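The core idea of conditioning attention over image features on the previously generated text can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual architecture: the bilinear scoring matrix `W`, the region count, and the feature dimension are all hypothetical choices for the example.

```python
import numpy as np

def text_conditional_attention(image_feats, text_emb, W):
    """Attend over image region features conditioned on a text embedding.

    image_feats: (R, D) array of R region features from a CNN.
    text_emb:    (D,)   embedding of the previously generated text.
    W:           (D, D) learned weight (hypothetical bilinear scoring).
    Returns (alpha, context): attention weights over regions and the
    attention-weighted image feature.
    """
    scores = image_feats @ (W @ text_emb)          # (R,) relevance of each region
    scores = scores - scores.max()                 # numerical stability for softmax
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    context = alpha @ image_feats                  # (D,) attended image feature
    return alpha, context
```

In a full captioning model, `context` would be fed to the LSTM at each step (here, as the semantic guidance), so that the attended image content shifts as the caption unfolds.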