Mitigating Hallucinations in Large Language Models via Self-Refinement-Enhanced Knowledge Retrieval

10 May 2024 · arXiv:2405.06545
Mengjia Niu, Hao Li, Jie Shi, Hamed Haddadi, Fan Mo
HILM

Papers citing "Mitigating Hallucinations in Large Language Models via Self-Refinement-Enhanced Knowledge Retrieval"

6 / 6 papers shown
Universal Collection of Euclidean Invariants between Pairs of Position-Orientations
Gijs Bellaard, B. Smets, R. Duits
04 Apr 2025
TPU-Gen: LLM-Driven Custom Tensor Processing Unit Generator
Deepak Vungarala, Mohammed E. Elbtity, Sumiya Syed, Sakila Alam, Kartik Pandit, Arnob Ghosh, Ramtin Zand, Shaahin Angizi
07 Mar 2025
In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He
HILM
03 Mar 2024
Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng-Wei Zhang, Cheng Zhou, Xinbing Wang, Luoyi Fu
HILM
22 Nov 2023
Large Language Models Meet Knowledge Graphs to Answer Factoid Questions
Mikhail Salnikov, Hai Le, Prateek Rajput, Irina Nikishina, Pavel Braslavski, Valentin Malykh, Alexander Panchenko
KELM
03 Oct 2023
The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM
26 Apr 2023