Self-Attentive Residual Decoder for Neural Machine Translation

14 September 2017
Lesly Miculicich, Nikolaos Pappas, Dhananjay Ram, Andrei Popescu-Belis

Papers citing "Self-Attentive Residual Decoder for Neural Machine Translation"

7 papers shown
$N$-gram Is Back: Residual Learning of Neural Text Generation with $n$-gram Language Model
Huayang Li, Deng Cai, J. Xu, Taro Watanabe
26 Oct 2022

Neural Machine Translation: A Review and Survey
Journal of Artificial Intelligence Research (JAIR), 2019
Felix Stahlberg
04 Dec 2019

Reflective Decoding Network for Image Captioning
IEEE International Conference on Computer Vision (ICCV), 2019
Lei Ke, Wenjie Pei, Ruiyu Li, Xiaoyong Shen, Yu-Wing Tai
30 Aug 2019

Middle-Out Decoding
Shikib Mehri, Leonid Sigal
28 Oct 2018

Integrating Weakly Supervised Word Sense Disambiguation into Neural Machine Translation
X. Pu, Nikolaos Pappas, James Henderson, Andrei Popescu-Belis
05 Oct 2018

Document-Level Neural Machine Translation with Hierarchical Attention Networks
Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, James Henderson
05 Sep 2018

Frustratingly Short Attention Spans in Neural Language Modeling
International Conference on Learning Representations (ICLR), 2017
Michal Daniluk, Tim Rocktäschel, Johannes Welbl, Sebastian Riedel
15 Feb 2017