Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost
arXiv: 2406.00975 · 3 June 2024
Authors: Masha Belyi, Robert Friel, Shuai Shao, Atindriyo Sanyal
Tags: HILM, RALM
Papers citing "Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost" (7 / 7 papers shown)
| Title | Authors | Tags | Counts | Date |
|---|---|---|---|---|
| Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? | Zorik Gekhman, G. Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, Jonathan Herzig | – | 28 / 28 / 0 | 09 May 2024 |
| Can Large Language Models Be an Alternative to Human Evaluations? | Cheng-Han Chiang, Hung-yi Lee | ALM, LM&MA | 180 / 353 / 0 | 03 May 2023 |
| The Internal State of an LLM Knows When It's Lying | A. Azaria, Tom Michael Mitchell | HILM | 181 / 192 / 0 | 26 Apr 2023 |
| Training language models to follow instructions with human feedback | Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe | OSLM, ALM | 270 / 8,441 / 0 | 04 Mar 2022 |
| Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization | Mengyao Cao, Yue Dong, Jackie C.K. Cheung | HILM | 137 / 116 / 0 | 30 Aug 2021 |
| Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark | Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter | ALM | 161 / 69 / 0 | 30 Apr 2021 |
| PubMedQA: A Dataset for Biomedical Research Question Answering | Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, Xinghua Lu | – | 178 / 554 / 0 | 13 Sep 2019 |