Mitigating LLM Hallucinations via Conformal Abstention
arXiv:2405.01563 · 4 April 2024
Yasin Abbasi-Yadkori, Ilja Kuzborskij, David Stutz, András György, Adam Fisch, Arnaud Doucet, Iuliya Beloshapka, Wei-Hung Weng, Yao-Yuan Yang, Csaba Szepesvári, A. Cemgil, Nenad Tomašev
[HILM]

Papers citing "Mitigating LLM Hallucinations via Conformal Abstention" (11 papers)
  1. MAQA: Evaluating Uncertainty Quantification in LLMs Regarding Data Uncertainty
     Yongjin Yang, Haneul Yoo, Hwaran Lee · 13 Aug 2024
  2. Do Not Design, Learn: A Trainable Scoring Function for Uncertainty Estimation in Generative LLMs
     D. Yaldiz, Yavuz Faruk Bakman, Baturalp Buyukates, Chenyang Tao, Anil Ramakrishna, Dimitrios Dimitriadis, Jieyu Zhao, Salman Avestimehr · 17 Jun 2024
  3. Conformal Language Modeling
     Victor Quach, Adam Fisch, Tal Schuster, Adam Yala, J. Sohn, Tommi Jaakkola, Regina Barzilay · 16 Jun 2023
  4. Conformal Nucleus Sampling [UQLM]
     Shauli Ravfogel, Carlos Wert Carvajal, M.F. Eggl · 04 May 2023
  5. The Internal State of an LLM Knows When It's Lying [HILM]
     A. Azaria, Tom Michael Mitchell · 26 Apr 2023
  6. SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models [HILM, LRM]
     Potsawee Manakul, Adian Liusie, Mark J. F. Gales · 15 Mar 2023
  7. Self-Consistency Improves Chain of Thought Reasoning in Language Models [ReLM, BDL, LRM, AI4CE]
     Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou · 21 Mar 2022
  8. Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control
     Anastasios Nikolas Angelopoulos, Stephen Bates, Emmanuel J. Candès, Michael I. Jordan, Lihua Lei · 03 Oct 2021
  9. Distribution-Free, Risk-Controlling Prediction Sets [OOD]
     Stephen Bates, Anastasios Nikolas Angelopoulos, Lihua Lei, Jitendra Malik, Michael I. Jordan · 07 Jan 2021
  10. Reducing conversational agents' overconfidence through linguistic calibration
      Sabrina J. Mielke, Arthur Szlam, Emily Dinan, Y-Lan Boureau · 30 Dec 2020
  11. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles [UQCV, BDL]
      Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell · 05 Dec 2016