Mitigating Large Language Model Hallucination with Faithful Finetuning
arXiv: 2406.11267
17 June 2024
Minda Hu, Bowei He, Yufei Wang, Liangyou Li, Chen Ma, Irwin King
HILM
Papers citing "Mitigating Large Language Model Hallucination with Faithful Finetuning" (7 of 7 papers shown)
LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits
Zikai Zhou, Qizheng Zhang, Hermann Kumbong, Kunle Olukotun
MQ
12 Feb 2025
Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination
Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Boxing Chen, Sarath Chandar
22 Oct 2024
Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations
Lei Yu, Meng Cao, Jackie Chi Kit Cheung, Yue Dong
HILM
27 Mar 2024
Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng Zhang, Chenghu Zhou, Xinbing Wang, Luoyi Fu
HILM
22 Nov 2023
Detecting and Mitigating Hallucinations in Multilingual Summarisation
Yifu Qiu, Yftah Ziser, Anna Korhonen, Edoardo M. Ponti, Shay B. Cohen
HILM
23 May 2023
The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey
Yichong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin
HILM
30 Apr 2021
Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics
Artidoro Pagnoni, Vidhisha Balachandran, Yulia Tsvetkov
HILM
27 Apr 2021