Reducing LLM Hallucinations using Epistemic Neural Networks (arXiv:2312.15576)
Shreyas Verma, Kien Tran, Yusuf Ali, Guangyu Min
25 December 2023
Papers citing "Reducing LLM Hallucinations using Epistemic Neural Networks" (8 of 8 papers shown)
"Zero-resource Hallucination Detection for Text Generation via Graph-based Contextual Knowledge Triples Modeling"
Xinyue Fang, Zhen Huang, Zhiliang Tian, Minghui Fang, Ziyi Pan, Quntian Fang, Zhihua Wen, Hengyue Pan, Dongsheng Li
Tags: HILM · Metrics: 88 / 2 / 0 · 17 Sep 2024
"Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models"
Yuji Zhang, Sha Li, Jiateng Liu, Pengfei Yu, Yi Ren Fung, Jing Li, Manling Li, Heng Ji
Metrics: 29 / 10 / 0 · 10 Jul 2024
"Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach"
Linyu Liu, Yu Pan, Xiaocheng Li, Guanting Chen
Metrics: 30 / 24 / 0 · 24 Apr 2024
"Multicalibration for Confidence Scoring in LLMs"
Gianluca Detommaso, Martín Bertrán, Riccardo Fogliato, Aaron Roth
Metrics: 24 / 12 / 0 · 06 Apr 2024
"Science Checker Reloaded: A Bidirectional Paradigm for Transparency and Logical Reasoning"
Loïc Rakotoson, S. Massip, F. Laleye
Metrics: 35 / 0 / 0 · 21 Feb 2024
"Distinguishing the Knowable from the Unknowable with Language Models"
Gustaf Ahdritz, Tian Qin, Nikhil Vyas, Boaz Barak, Benjamin L. Edelman
Metrics: 24 / 18 / 0 · 05 Feb 2024
"Self-Consistency Improves Chain of Thought Reasoning in Language Models"
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
Tags: ReLM, BDL, LRM, AI4CE · Metrics: 297 / 3,217 / 0 · 21 Mar 2022
"Chain-of-Thought Prompting Elicits Reasoning in Large Language Models"
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
Tags: LM&Ro, LRM, AI4CE, ReLM · Metrics: 315 / 8,402 / 0 · 28 Jan 2022