arXiv:2406.00975
Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost
3 June 2024
Masha Belyi, Robert Friel, Shuai Shao, Atindriyo Sanyal
HILM, RALM
Papers citing "Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost" (7 of 7 papers shown)
Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?
Zorik Gekhman, G. Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, Jonathan Herzig
09 May 2024 · 32 · 28 · 0

Can Large Language Models Be an Alternative to Human Evaluations?
Cheng-Han Chiang, Hung-yi Lee
ALM, LM&MA
03 May 2023 · 195 · 353 · 0

The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM
26 Apr 2023 · 197 · 192 · 0

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022 · 298 · 8,441 · 0

Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Mengyao Cao, Yue Dong, Jackie C.K. Cheung
HILM
30 Aug 2021 · 159 · 116 · 0

Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark
Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter
ALM
30 Apr 2021 · 171 · 69 · 0

PubMedQA: A Dataset for Biomedical Research Question Answering
Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, Xinghua Lu
13 Sep 2019 · 180 · 554 · 0