Confabulation: The Surprising Value of Large Language Model Hallucinations
Peiqi Sui, Eamon Duede, Sophie Wu, Richard Jean So
6 June 2024. arXiv: 2406.04175. Tags: HILM, LLMAG.
Papers citing "Confabulation: The Surprising Value of Large Language Model Hallucinations" (9 of 9 papers shown):
1. OAEI-LLM-T: A TBox Benchmark Dataset for Understanding Large Language Model Hallucinations in Ontology Matching
   Zhangcheng Qiang, Kerry Taylor, Weiqing Wang, Jing Jiang
   25 Mar 2025. Citations: 0.
2. Valuable Hallucinations: Realizable Non-realistic Propositions
   Qiucheng Chen, Bo Wang
   16 Feb 2025. Citations: 0. Tags: LRM.
3. DocETL: Agentic Query Rewriting and Evaluation for Complex Document Processing
   Shreya Shankar, Tristan Chambers, Tarak Shah, Aditya G. Parameswaran, Eugene Wu
   16 Oct 2024. Citations: 5. Tags: LLMAG.
4. Hallucination Detection and Hallucination Mitigation: An Investigation
   Junliang Luo, Tianyu Li, Di Wu, Michael R. M. Jenkin, Steve Liu, Gregory Dudek
   16 Jan 2024. Citations: 20. Tags: HILM, LLMAG.
5. Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?
   Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas
   27 Nov 2023. Citations: 35. Tags: HILM.
6. Correction with Backtracking Reduces Hallucination in Summarization
   Zhenzhen Liu, Chao Wan, Varsha Kishore, Jin Peng Zhou, Minmin Chen, Kilian Q. Weinberger
   24 Oct 2023. Citations: 3. Tags: HILM.
7. The Dark Side of ChatGPT: Legal and Ethical Challenges from Stochastic Parrots and Hallucination
   Z. Li
   21 Apr 2023. Citations: 34. Tags: AILaw, SILM.
8. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
   04 Mar 2022. Citations: 11,730. Tags: OSLM, ALM.
9. Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
   Meng Cao, Yue Dong, Jackie C.K. Cheung
   30 Aug 2021. Citations: 144. Tags: HILM.