Improving LSTM-based Video Description with Linguistic Knowledge Mined
from Text
Abstract
This paper investigates how linguistic knowledge mined from large text corpora can aid the generation of natural language descriptions of videos. Specifically, we integrate both a neural language model and distributional semantics trained on large text corpora into a recent LSTM-based architecture for video description. We evaluate our approach on a collection of YouTube videos as well as two large movie description datasets, showing significant improvements in grammaticality while modestly improving descriptive quality.
