arXiv:2407.12847
Aligning Model Evaluations with Human Preferences: Mitigating Token Count Bias in Language Model Assessments
5 July 2024
Roland Daynauth
Jason Mars
Papers citing "Aligning Model Evaluations with Human Preferences: Mitigating Token Count Bias in Language Model Assessments"
No citing papers found.