arXiv: 2401.03205
The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models
6 January 2024
Junyi Li, Jie Chen, Ruiyang Ren, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
HILM
Papers citing "The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models"
8 / 8 papers shown
aiXamine: Simplified LLM Safety and Security
Fatih Deniz, Dorde Popovic, Yazan Boshmaf, Euisuh Jeong, M. Ahmad, Sanjay Chawla, Issa M. Khalil
ELM · 72 · 0 · 0 · 21 Apr 2025
OAEI-LLM-T: A TBox Benchmark Dataset for Understanding Large Language Model Hallucinations in Ontology Matching
Zhangcheng Qiang, Kerry Taylor, Weiqing Wang, Jing Jiang
52 · 0 · 0 · 25 Mar 2025
FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows"
Yifei Ming, Senthil Purushwalkam, Shrey Pandit, Zixuan Ke, Xuan-Phi Nguyen, Caiming Xiong, Shafiq R. Joty
HILM · 110 · 16 · 0 · 30 Sep 2024
How Language Model Hallucinations Can Snowball
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith
HILM · LRM · 78 · 246 · 0 · 22 May 2023
The Intended Uses of Automated Fact-Checking Artefacts: Why, How and Who
M. Schlichtkrull, N. Ousidhoum, Andreas Vlachos
109 · 17 · 0 · 27 Apr 2023
The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM · 216 · 297 · 0 · 26 Apr 2023
Generate rather than Retrieve: Large Language Models are Strong Context Generators
W. Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, Meng-Long Jiang
RALM · AIMat · 215 · 318 · 0 · 21 Sep 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 301 · 11,730 · 0 · 04 Mar 2022