Promoting the Knowledge of Source Syntax in Transformer NMT Is Not Needed

24 October 2019
Thuong-Hai Pham
Dominik Macháček
Ondřej Bojar

Papers citing "Promoting the Knowledge of Source Syntax in Transformer NMT Is Not Needed"

3 / 3 papers shown
Hard-Coded Gaussian Attention for Neural Machine Translation
  Weiqiu You, Simeng Sun, Mohit Iyyer (02 May 2020)

Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation
  Alessandro Raganato, Yves Scherrer, Jörg Tiedemann (24 Feb 2020)

Six Challenges for Neural Machine Translation
  Philipp Koehn, Rebecca Knowles (12 Jun 2017)