Mitigating Hallucinations in Large Language Models via Self-Refinement-Enhanced Knowledge Retrieval
Mengjia Niu, Hao Li, Jie Shi, Hamed Haddadi, Fan Mo
arXiv:2405.06545 · 10 May 2024 · HILM

Papers citing "Mitigating Hallucinations in Large Language Models via Self-Refinement-Enhanced Knowledge Retrieval" (6 of 6 papers shown)

Universal Collection of Euclidean Invariants between Pairs of Position-Orientations
Gijs Bellaard, B. Smets, R. Duits
04 Apr 2025

TPU-Gen: LLM-Driven Custom Tensor Processing Unit Generator
Deepak Vungarala, Mohammed E. Elbtity, Sumiya Syed, Sakila Alam, Kartik Pandit, Arnob Ghosh, Ramtin Zand, Shaahin Angizi
07 Mar 2025

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He
HILM · 03 Mar 2024

Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng Zhang, Chenghu Zhou, Xinbing Wang, Luoyi Fu
HILM · 22 Nov 2023

Large Language Models Meet Knowledge Graphs to Answer Factoid Questions
Mikhail Salnikov, Hai Le, Prateek Rajput, Irina Nikishina, Pavel Braslavski, Valentin Malykh, Alexander Panchenko
KELM · 03 Oct 2023

The Internal State of an LLM Knows When It's Lying
Amos Azaria, Tom Michael Mitchell
HILM · 26 Apr 2023