A Visual Attention Grounding Neural Model for Multimodal Machine Translation
Mingyang Zhou, Runxiang Cheng, Yong Jae Lee, Zhou Yu
arXiv: 1808.08266 · 24 August 2018
Papers citing "A Visual Attention Grounding Neural Model for Multimodal Machine Translation" (19 papers)
Detecting Concrete Visual Tokens for Multimodal Machine Translation
Braeden Bowen, Vipin Vijayan, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup (05 Mar 2024)
Impact of Visual Context on Noisy Multimodal NMT: An Empirical Study for English to Indian Languages
Baban Gain, Dibyanayan Bandyopadhyay, Subhabrata Mukherjee, Chandranath Adak, Asif Ekbal (30 Aug 2023)
Low-resource Neural Machine Translation with Cross-modal Alignment
Zhe Yang, Qingkai Fang, Yang Feng (13 Oct 2022)
VALHALLA: Visual Hallucination for Machine Translation
Yi Li, Yikang Shen, Yoon Kim, Chun-Fu Chen, Rogerio Feris, David D. Cox, Nuno Vasconcelos (31 May 2022)
Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment
Tuan Dinh, Jy-yong Sohn, Shashank Rajput, Timothy Ossowski, Yifei Ming, Junjie Hu, Dimitris Papailiopoulos, Kangwook Lee (23 May 2022)
Neural Machine Translation with Phrase-Level Universal Visual Representations
Qingkai Fang, Yang Feng (19 Mar 2022)
Product-oriented Machine Translation with Cross-modal Cross-lingual Pre-training
Yuqing Song, Shizhe Chen, Qin Jin, Wei Luo, Jun Xie, Fei Huang (25 Aug 2021)
A Survey on Low-Resource Neural Machine Translation
Rui Wang, Xu Tan, Renqian Luo, Tao Qin, Tie-Yan Liu (09 Jul 2021)
Good for Misconceived Reasons: An Empirical Revisiting on the Need for Visual Context in Multimodal Machine Translation
Zhiyong Wu, Lingpeng Kong, W. Bi, Xiang Li, B. Kao (30 May 2021)
UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training
Mingyang Zhou, Luowei Zhou, Shuohang Wang, Yu Cheng, Linjie Li, Zhou Yu, Jingjing Liu (01 Apr 2021)
Gumbel-Attention for Multi-modal Machine Translation
Pengbo Liu, Hailong Cao, T. Zhao (16 Mar 2021)
WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
Krishna Srinivasan, K. Raman, Jiecao Chen, Michael Bendersky, Marc Najork (02 Mar 2021)
A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation
Yongjing Yin, Fandong Meng, Jinsong Su, Chulun Zhou, Zhengyuan Yang, Jie Zhou, Jiebo Luo (17 Jul 2020)
Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting
Po-Yao (Bernie) Huang, Junjie Hu, Xiaojun Chang, Alexander G. Hauptmann (06 May 2020)
Visual Agreement Regularized Training for Multi-Modal Machine Translation
Pengcheng Yang, Boxing Chen, Pei Zhang, Xu Sun (27 Dec 2019)
Multimodal Machine Translation through Visuals and Speech
U. Sulubacak, Ozan Caglayan, Stig-Arne Grönroos, Aku Rouhe, Desmond Elliott, Lucia Specia, Jörg Tiedemann (28 Nov 2019)
Trends in Integration of Vision and Language Research: A Survey of Tasks, Datasets, and Methods
Aditya Mogadala, M. Kalimuthu, Dietrich Klakow (22 Jul 2019)
Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation
Shantipriya Parida, Ondřej Bojar, S. Dash (21 Jul 2019)
From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots
Shizhe Chen, Qin Jin, Jianlong Fu (03 Jun 2019)