Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures
Gongbo Tang, Mathias Müller, Annette Rios Gonzales, Rico Sennrich
arXiv:1808.08946 (27 August 2018)
Papers citing "Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures" (18 / 18 papers shown)

Interpretable Emergent Language Using Inter-Agent Transformers
Mannan Bhardwaj | AI4CE | 88 | 0 | 0 | 04 May 2025

Hashing it Out: Predicting Unhealthy Conversations on Twitter
Steven Leung, Filippos Papapolyzos | 18 | 1 | 0 | 17 Nov 2023

MRTNet: Multi-Resolution Temporal Network for Video Sentence Grounding
Wei Ji, Long Chen, Yin-wei Wei, Yiming Wu, Tat-Seng Chua | AI4TS | 27 | 18 | 0 | 26 Dec 2022

Convolution-enhanced Evolving Attention Networks
Yujing Wang, Yaming Yang, Zhuowan Li, Jiangang Bai, Mingliang Zhang, Xiangtai Li, J. Yu, Ce Zhang, Gao Huang, Yu Tong | ViT | 24 | 6 | 0 | 16 Dec 2022

CAT-Net: A Cross-Slice Attention Transformer Model for Prostate Zonal Segmentation in MRI
A. Hung, Haoxin Zheng, Qi Miao, S. Raman, D. Terzopoulos, Kyunghyun Sung | ViT, MedIm | 19 | 44 | 0 | 29 Mar 2022

Molformer: Motif-based Transformer on 3D Heterogeneous Molecular Graphs
Fang Wu, Dragomir R. Radev, Huabin Xing | ViT | 36 | 54 | 0 | 04 Oct 2021

Fine Grained Human Evaluation for English-to-Chinese Machine Translation: A Case Study on Scientific Text
Ming Liu, Heng Zhang, Guanhao Wu | 28 | 1 | 0 | 13 Sep 2021

Contrastive Learning for Context-aware Neural Machine Translation Using Coreference Information
Yong-keun Hwang, Hyungu Yun, Kyomin Jung | 25 | 11 | 0 | 13 Sep 2021

Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation
Alessandro Raganato, Yves Scherrer, Jörg Tiedemann | 24 | 92 | 0 | 24 Feb 2020

Weakly-Supervised Video Moment Retrieval via Semantic Completion Network
Zhijie Lin, Zhou Zhao, Zhu Zhang, Qi. Wang, Huasheng Liu | 22 | 149 | 0 | 19 Nov 2019

SesameBERT: Attention for Anywhere
Ta-Chun Su, Hsiang-Chih Cheng | 28 | 7 | 0 | 08 Oct 2019

DuTongChuan: Context-aware Translation Model for Simultaneous Interpreting
Hao Xiong, Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua-Hong Wu, Haifeng Wang | 33 | 25 | 0 | 30 Jul 2019

Investigating Self-Attention Network for Chinese Word Segmentation
Leilei Gan, Yue Zhang | 13 | 11 | 0 | 26 Jul 2019

CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks
Roberto Dessì, Marco Baroni | UQCV | 16 | 43 | 0 | 21 May 2019

Linguistic Knowledge and Transferability of Contextual Representations
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith | 52 | 717 | 0 | 21 Mar 2019

Context-Aware Self-Attention Networks
Baosong Yang, Jian Li, Derek F. Wong, Lidia S. Chao, Xing Wang, Zhaopeng Tu | 31 | 113 | 0 | 15 Feb 2019

An Analysis of Attention Mechanisms: The Case of Word Sense Disambiguation in Neural Machine Translation
Gongbo Tang, Rico Sennrich, Joakim Nivre | 19 | 84 | 0 | 17 Oct 2018

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning | 218 | 7,926 | 0 | 17 Aug 2015