arXiv:2205.06828
Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications
Kaitlyn Zhou, Su Lin Blodgett, Adam Trischler, Hal Daumé, Kaheer Suleman, Alexandra Olteanu
13 May 2022
Papers citing "Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications" (6 papers)
LLM-Rubric: A Multidimensional, Calibrated Approach to Automated Evaluation of Natural Language Texts
Helia Hashemi, J. Eisner, Corby Rosset, Benjamin Van Durme, Chris Kedzie
03 Jan 2025

Responsible AI Considerations in Text Summarization Research: A Review of Current Practices
Yu Lu Liu, Meng Cao, Su Lin Blodgett, Jackie Chi Kit Cheung, Alexandra Olteanu, Adam Trischler
18 Nov 2023

Rethinking Model Evaluation as Narrowing the Socio-Technical Gap
Q. V. Liao, Ziang Xiao
01 Jun 2023

The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation
Marzena Karpinska, Nader Akoury, Mohit Iyyer
14 Sep 2021

Perturbation CheckLists for Evaluating NLG Evaluation Metrics
Ananya B. Sai, Tanay Dixit, D. Y. Sheth, S. Mohan, Mitesh M. Khapra
13 Sep 2021

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Sebastian Gehrmann, Tosin P. Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, ..., Nishant Subramani, Wei-ping Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou
02 Feb 2021