ResearchTrend.AI

Cited By: arXiv 2401.08694
Combining Confidence Elicitation and Sample-based Methods for Uncertainty Quantification in Misinformation Mitigation

13 January 2024
Mauricio Rivera, Jean-François Godbout, Reihaneh Rabbany, Kellin Pelrine
    HILM

Papers citing "Combining Confidence Elicitation and Sample-based Methods for Uncertainty Quantification in Misinformation Mitigation"

9 / 9 papers shown
Comparing Uncertainty Measurement and Mitigation Methods for Large Language Models: A Systematic Review
Toghrul Abbasli, Kentaroh Toyoda, Yuan Wang, Leon Witt, Muhammad Asif Ali, Yukai Miao, Dan Li, Qingsong Wei
UQCV · 25 Apr 2025

Agent-Based Uncertainty Awareness Improves Automated Radiology Report Labeling with an Open-Source Large Language Model
Hadas Ben-Atya, N. Gavrielov, Zvi Badash, G. Focht, R. Cytter-Kuint, Talar Hagopian, Dan Turner, M. Freiman
02 Feb 2025

Label-Confidence-Aware Uncertainty Estimation in Natural Language Generation
Qinhong Lin, Linna Zhou, Zhongliang Yang, Yuang Cai
HILM · 10 Dec 2024

Epistemic Integrity in Large Language Models
Bijean Ghafouri, Shahrad Mohammadzadeh, James Zhou, Pratheeksha Nair, Jacob-Junqi Tian, Mayank Goel, Reihaneh Rabbany, Jean-Francois Godbout, Kellin Pelrine
HILM · 10 Nov 2024

Calibrating Verbalized Probabilities for Large Language Models
Cheng Wang, Gyuri Szarvas, Georges Balazs, Pavel Danchenko, P. Ernst
09 Oct 2024

Enhancing Healthcare LLM Trust with Atypical Presentations Recalibration
Jeremy Qin, Bang Liu, Quoc Dinh Nguyen
05 Sep 2024

Web Retrieval Agents for Evidence-Based Misinformation Detection
Jacob-Junqi Tian, Hao Yu, Yury Orlovskiy, Tyler Vergho, Mauricio Rivera, Mayank Goel, Zachary Yang, Jean-Francois Godbout, Reihaneh Rabbany, Kellin Pelrine
LLMAG · OffRL · 15 Aug 2024

LUQ: Long-text Uncertainty Quantification for LLMs
Caiqi Zhang, Fangyu Liu, Marco Basaldella, Nigel Collier
HILM · 29 Mar 2024

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM · BDL · LRM · AI4CE · 21 Mar 2022