arXiv: 2401.06568
Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation
12 January 2024
Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, Shujian Huang
Papers citing
"Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation"
An LLM-as-a-judge Approach for Scalable Gender-Neutral Translation Evaluation
Andrea Piergentili, Beatrice Savoldi, Matteo Negri, L. Bentivogli (16 Apr 2025)
Generating Medically-Informed Explanations for Depression Detection using LLMs
Xiangyong Chen, Xiaochuan Lin (18 Mar 2025)
When LLMs Struggle: Reference-less Translation Evaluation for Low-resource Languages
Archchana Sindhujan, Diptesh Kanojia, Constantin Orasan, Shenbin Qian (08 Jan 2025)
From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge
Dawei Li, Bohan Jiang, Liangjie Huang, Alimohammad Beigi, Chengshuai Zhao, ..., Canyu Chen, Tianhao Wu, Kai Shu, Lu Cheng, Huan Liu (25 Nov 2024)
What do Large Language Models Need for Machine Translation Evaluation?
Shenbin Qian, Archchana Sindhujan, Minnie Kabra, Diptesh Kanojia, Constantin Orasan, Tharindu Ranasinghe, Frédéric Blain (04 Oct 2024)
MQM-APE: Toward High-Quality Error Annotation Predictors with Automatic Post-Editing in LLM Translation Evaluators
Qingyu Lu, Liang Ding, Kanjian Zhang, Jinxia Zhang, Dacheng Tao (22 Sep 2024)