Banishing LLM Hallucinations Requires Rethinking Generalization (arXiv 2406.17642)
25 June 2024
Johnny Li, Saksham Consul, Eda Zhou, James Wong, Naila Farooqui, Yuxin Ye, Nithyashree Manohar, Zhuxiaona Wei, Tian Wu, Ben Echols, Sharon Zhou, Gregory Diamos
Tags: LRM
Papers citing "Banishing LLM Hallucinations Requires Rethinking Generalization" (10 papers):
DTKG: Dual-Track Knowledge Graph-Verified Reasoning Framework for Multi-Hop QA (18 Oct 2025)
Changhao Wang, Yanfang Liu, Xinxin Fan, Anzhi Zhou, Lao Tian, Yunfeng Lu
Tags: LRM

LLM-based Agents Suffer from Hallucinations: A Survey of Taxonomy, Methods, and Directions (23 Sep 2025)
Xixun Lin, Yucheng Ning, Jingwen Zhang, Yan Dong, Y. Liu, ..., Bin Wang, Yanan Cao, Kai-xiang Chen, Songlin Hu, Li Guo
Tags: LLMAG, LRM

Probabilistic Runtime Verification, Evaluation and Risk Assessment of Visual Deep Learning Systems (23 Sep 2025)
Birk Torpmann-Hagen, Pål Halvorsen, Michael A. Riegler, Dag Johansen

CCL-XCoT: An Efficient Cross-Lingual Knowledge Transfer Method for Mitigating Hallucination Generation (17 Jul 2025)
Weihua Zheng, Roy Ka-Wei Lee, Zhengyuan Liu, Kui Wu, AiTi Aw, Bowei Zou
Tags: HILM, LRM

'Generalization is hallucination' through the lens of tensor completions (24 Feb 2025)
Liang Ze Wong
Tags: VLM

KnowPath: Knowledge-enhanced Reasoning via LLM-generated Inference Paths over Knowledge Graphs (17 Feb 2025)
Qi Zhao, Hongyu Yang, Qi Song, Xinwei Yao, Xiangyang Li

Scopes of Alignment (15 Jan 2025)
Kush R. Varshney, Zahra Ashktorab, Djallel Bouneffouf, Matthew D Riemer, Justin D. Weisz

Did You Hear That? Introducing AADG: A Framework for Generating Benchmark Data in Audio Anomaly Detection (04 Oct 2024)
Ksheeraja Raghavan, Samiran Gode, Ankit Parag Shah, Surabhi Raghavan, Wolfram Burgard, Bhiksha Raj, Rita Singh

Towards a Science Exocortex (24 Jun 2024)
Kevin G. Yager

Hallucination is Inevitable: An Innate Limitation of Large Language Models (22 Jan 2024)
Ziwei Xu, Sanjay Jain, Mohan S. Kankanhalli
Tags: HILM, LRM