Attaining the Unattainable? Reassessing Claims of Human Parity in Neural Machine Translation

30 August 2018
Antonio Toral, Sheila Castilho, Ke Hu, Andy Way

Papers citing "Attaining the Unattainable? Reassessing Claims of Human Parity in Neural Machine Translation"

21 papers shown
A comparison of translation performance between DeepL and Supertext (04 Feb 2025)
Alex Flückiger, Chantal Amrhein, Tim Graf, Frédéric Odermatt, Martin Pömsl, Philippe Schläpfer, Florian Schottmann, Samuel Läubli
Communities: ELM

How Good Are LLMs for Literary Translation, Really? Literary Translation Evaluation with Humans and LLMs (24 Oct 2024)
Ran Zhang, Wei-Ye Zhao, Steffen Eger

MQM-Chat: Multidimensional Quality Metrics for Chat Translation (29 Aug 2024)
Yunmeng Li, Jun Suzuki, Makoto Morishita, Kaori Abe, Kentaro Inui

Human Evaluation of English–Irish Transformer-Based NMT (04 Mar 2024)
Séamus Lankford, Haithem Afli, Andy Way

Chat Translation Error Detection for Assisting Cross-lingual Communications (02 Aug 2023)
Yunmeng Li, Jun Suzuki, Makoto Morishita, Kaori Abe, Ryoko Tokuhisa, Ana Brassard, Kentaro Inui

Investigating the Translation Performance of a Large Multilingual Language Model: the Case of BLOOM (03 Mar 2023)
Rachel Bawden, François Yvon
Communities: VLM, LRM

Democratizing Neural Machine Translation with OPUS-MT (04 Dec 2022)
Jörg Tiedemann, Mikko Aulamo, Daria Bakshandaeva, M. Boggia, Stig-Arne Grönroos, Tommi Nieminen, Alessandro Raganato, Yves Scherrer, Raúl Vázquez, Sami Virpioja

Explaining Translationese: why are Neural Classifiers Better and what do they Learn? (24 Oct 2022)
Kwabena Amponsah-Kaakyire, Daria Pylypenko, Josef van Genabith, C. España-Bonet

Searching for a higher power in the human evaluation of MT (20 Oct 2022)
Johnny Tian-Zheng Wei, Tom Kocmi, C. Federmann

Discourse Cohesion Evaluation for Document-Level Neural Machine Translation (19 Aug 2022)
Xin Tan, Longyin Zhang, Guodong Zhou

The Fallacy of AI Functionality (20 Jun 2022)
Inioluwa Deborah Raji, Indra Elizabeth Kumar, Aaron Horowitz, Andrew D. Selbst

Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset (25 May 2022)
Ashish V. Thapliyal, Jordi Pont-Tuset, Xi Chen, Radu Soricut
Communities: VGen

Towards Debiasing Translation Artifacts (16 May 2022)
Koel Dutta Chowdhury, Rricha Jalota, C. España-Bonet, Josef van Genabith

Survey of Low-Resource Machine Translation (01 Sep 2021)
Barry Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindřich Helcl, Alexandra Birch
Communities: AIMat

Human Evaluation of Creative NLG Systems: An Interdisciplinary Survey on Recent Papers (31 Jul 2021)
Mika Hämäläinen, Khalid Alnajjar
Communities: ELM, LM&MA

Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation (29 Apr 2021)
Markus Freitag, George F. Foster, David Grangier, Viresh Ratnakar, Qijun Tan, Wolfgang Macherey

Reassessing Claims of Human Parity and Super-Human Performance in Machine Translation at WMT 2019 (12 May 2020)
Antonio Toral

A Set of Recommendations for Assessing Human-Machine Parity in Language Translation (03 Apr 2020)
Samuel Läubli, Sheila Castilho, Graham Neubig, Rico Sennrich, Qinlan Shen, Antonio Toral

Translationese as a Language in "Multilingual" NMT (10 Nov 2019)
Parker Riley, Isaac Caswell, Markus Freitag, David Grangier

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation (26 Sep 2016)
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
Communities: AIMat

Neural versus Phrase-Based Machine Translation Quality: a Case Study (16 Aug 2016)
L. Bentivogli, Arianna Bisazza, Mauro Cettolo, Marcello Federico