Does Multimodality Help Human and Machine for Translation and Image Captioning?

30 May 2016 · arXiv:1605.09186
Ozan Caglayan
Walid Aransa
Yaxing Wang
Marc Masana
Mercedes García-Martínez
Fethi Bougares
Loïc Barrault
Joost van de Weijer
Abstract

This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and for image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
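
The abstract's contrast between monomodal and multimodal attentional models comes down to how a recurrent decoder can attend jointly to source-text states and image features. Below is a minimal, hypothetical PyTorch sketch of one such decoder step; the module name, the dimensions, and the concatenation-based fusion of the two context vectors are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a multimodal attention step (illustrative, not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalAttentionDecoderStep(nn.Module):
    """One decoder step that attends over source-text encoder states and,
    in the multimodal setting, over spatial image features (e.g. a conv grid)."""

    def __init__(self, hidden_dim, txt_dim, img_dim):
        super().__init__()
        self.txt_score = nn.Linear(txt_dim + hidden_dim, 1)  # scorer for text positions
        self.img_score = nn.Linear(img_dim + hidden_dim, 1)  # scorer for image regions
        self.rnn = nn.GRUCell(txt_dim + img_dim, hidden_dim)

    def attend(self, scorer, states, h):
        # states: (batch, n, dim); h: (batch, hidden_dim)
        n = states.size(1)
        h_exp = h.unsqueeze(1).expand(-1, n, -1)
        scores = scorer(torch.cat([states, h_exp], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                 # attention weights over positions
        return (alpha.unsqueeze(-1) * states).sum(dim=1)  # weighted context vector

    def forward(self, h, txt_states, img_states):
        ctx_txt = self.attend(self.txt_score, txt_states, h)
        ctx_img = self.attend(self.img_score, img_states, h)
        # Fuse the two modality contexts by concatenation, then update the state.
        return self.rnn(torch.cat([ctx_txt, ctx_img], dim=-1), h)

# Usage with made-up sizes: 20 source tokens, a 14x14 image feature grid.
step = MultimodalAttentionDecoderStep(hidden_dim=256, txt_dim=512, img_dim=512)
h = torch.zeros(4, 256)
h = step(h, torch.randn(4, 20, 512), torch.randn(4, 196, 512))
```

Dropping the image-attention branch and feeding only the text context to the GRU cell recovers a monomodal attentional decoder, which is the comparison the abstract describes.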
