Exploring Precision and Recall to assess the quality and diversity of LLMs
arXiv: 2402.10693 (16 February 2024)
Florian Le Bronnec, Alexandre Verine, Benjamin Négrevergne, Yann Chevaleyre, Alexandre Allauzen

Papers citing "Exploring Precision and Recall to assess the quality and diversity of LLMs" (8 papers)

Base Models Beat Aligned Models at Randomness and Creativity
Peter West, Christopher Potts
30 Apr 2025

Measuring Diversity in Synthetic Datasets
Yuchang Zhu, Huizhe Zhang, Bingzhe Wu, Jintang Li, Zibin Zheng, Peilin Zhao, Liang Chen, Yatao Bian
12 Feb 2025

Benchmarking Language Model Creativity: A Case Study on Code Generation
Yining Lu, Dixuan Wang, Tianjian Li, Dongwei Jiang, Daniel Khashabi, Meng Jiang
12 Jul 2024

From Distributional to Overton Pluralism: Investigating Large Language Model Alignment
Thom Lake, Eunsol Choi, Greg Durrett
25 Jun 2024

Understanding the Effects of RLHF on LLM Generalisation and Diversity
Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis, Jelena Luketina, Eric Hambro, Edward Grefenstette, Roberta Raileanu
10 Oct 2023

On the Usefulness of Embeddings, Clusters and Strings for Text Generator Evaluation
Tiago Pimentel, Clara Meister, Ryan Cotterell
31 May 2022

StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets
Axel Sauer, Katja Schwarz, Andreas Geiger
01 Feb 2022

How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models
Ahmed Alaa, Boris van Breugel, Evgeny S. Saveliev, Mihaela van der Schaar
17 Feb 2021