Reducing conversational agents' overconfidence through linguistic calibration

Transactions of the Association for Computational Linguistics (TACL), 2020
30 December 2020
Sabrina J. Mielke, Arthur Szlam, Emily Dinan, Y-Lan Boureau

Papers citing "Reducing conversational agents' overconfidence through linguistic calibration"

50 / 149 papers shown
On the Entropy Calibration of Language Models
Steven Cao, Gregory Valiant, Percy Liang
15 Nov 2025

Interpreting and Mitigating Unwanted Uncertainty in LLMs
Tiasa Singha Roy, Ayush Rajesh Jhaveri, Ilias Triantafyllopoulos
26 Oct 2025

Efficient semantic uncertainty quantification in language models via diversity-steered sampling
Ji Won Park, K. Cho
24 Oct 2025

Beyond Accuracy: Are Time Series Foundation Models Well-Calibrated?
Coen Adler, Yuxin Chang, Felix Draxler, Samar Abdi, Padhraic Smyth
17 Oct 2025

ESI: Epistemic Uncertainty Quantification via Semantic-preserving Intervention for Large Language Models
Mingda Li, Xinyu Li, Weinan Zhang, Longxuan Ma
15 Oct 2025

Teaching Language Models to Faithfully Express their Uncertainty
Bryan Eikema, Evgenia Ilia, José G. C. de Souza, Chrysoula Zerva, Wilker Aziz
14 Oct 2025

SIMBA UQ: Similarity-Based Aggregation for Uncertainty Quantification in Large Language Models
D. Bhattacharjya, Balaji Ganesan, Junkyu Lee, Radu Marinescu, Katsiaryna Mirylenka, Michael R. Glass, Xiao Shou
10 Oct 2025

LLM Microscope: What Model Internals Reveal About Answer Correctness and Context Utilization
Jiarui Liu, Jivitesh Jain, Mona T. Diab, Nishant Subramani
05 Oct 2025

Generalized Correctness Models: Learning Calibrated and Model-Agnostic Correctness Predictors from Historical Patterns
Hanqi Xiao, Vaidehi Patil, Hyunji Lee, Elias Stengel-Eskin, Mohit Bansal
29 Sep 2025

Can Large Language Models Express Uncertainty Like Human?
Linwei Tao, Yi-Fan Yeh, Bo Kai, Minjing Dong, Tao Huang, Tom A. Lamb, Jialin Yu, Philip Torr, Chang Xu
29 Sep 2025

Black-Box Hallucination Detection via Consistency Under the Uncertain Expression
Seongho Joo, Kyungmin Min, Jahyun Koo, Kyomin Jung
26 Sep 2025

Hallucination reduction with CASAL: Contrastive Activation Steering For Amortized Learning
Wannan Yang, Xinchi Qiu, L. Yu, Yuchen Zhang, Oliver Aobo Yang, Narine Kokhlikyan, Nicola Cancedda, Diego Garcia-Olano
25 Sep 2025

Estimating Semantic Alphabet Size for LLM Uncertainty Quantification
Lucas H. McCabe, Rimon Melamed, Thomas Hartvigsen, H. H. Huang
17 Sep 2025

Incongruent Positivity: When Miscalibrated Positivity Undermines Online Supportive Conversations
Leen Almajed, Abeer ALdayel
12 Sep 2025

HalluField: Detecting LLM Hallucinations via Field-Theoretic Modeling
Minh Nhat Vu, Brian K. Tran, Syed A. Shah, Geigh Zollicoffer, N. Hoang-Xuan, Manish Bhattarai
12 Sep 2025

PIE: Performance Interval Estimation for Free-Form Generation Tasks
Chi-Yang Hsu, Alexander Braylan, Yiheng Su, Omar Alonso, Matthew Lease
09 Sep 2025

Why Language Models Hallucinate
Adam Tauman Kalai, Ofir Nachum, Santosh Vempala, Edwin Zhang
04 Sep 2025

Hallucinations in Code Change to Natural Language Generation: Prevalence and Evaluation of Detection Metrics
Chunhua Liu, Hong Yi Lin, Patanamon Thongtanunam
12 Aug 2025

Human-Alignment and Calibration of Inference-Time Uncertainty in Large Language Models
Kyle Moore, Jesse Roberts, Daryl Watson
11 Aug 2025

Overconfidence in LLM-as-a-Judge: Diagnosis and Confidence-Driven Solution
Zailong Tian, Zhuoheng Han, Yanzhe Chen, Haozhe Xu, Xi Yang, Richeng Xuan, Houfeng Wang, Lizi Liao
08 Aug 2025

Llama-3.1-FoundationAI-SecurityLLM-8B-Instruct Technical Report
Sajana Weerawardhena, Paul Kassianik, Blaine Nelson, Baturay Saglam, Anu Vellore, ..., Dhruv Kedia, Kojin Oshiba, Zhouran Yang, Yaron Singer, Amin Karbasi
01 Aug 2025

Can LLMs Ground when they (Don't) Know: A Study on Direct and Loaded Political Questions
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Clara Lachenmaier, Judith Sieker, Sina Zarrieß
10 Jun 2025

From Calibration to Collaboration: LLM Uncertainty Quantification Should Be More Human-Centered
Siddartha Devic, Tejas Srinivasan, Jesse Thomason, Willie Neiswanger
09 Jun 2025

ConfRAG: Confidence-Guided Retrieval-Augmenting Generation
Yin Huang, Yifan Ethan Xu, Kai Sun, Vera Yan, Alicia Sun, ..., Aaron Colak, Anuj Kumar, Wen-tau Yih, Xin Luna Dong
08 Jun 2025

Large Language Models Can Be a Viable Substitute for Expert Political Surveys When a Shock Disrupts Traditional Measurement Approaches
Patrick Y. Wu
06 Jun 2025

MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs
Gabrielle Kaili-May Liu, Gal Yona, Avi Caciularu, Idan Szpektor, Tim G. J. Rudner, Arman Cohan
30 May 2025

Explaining Sources of Uncertainty in Automated Fact-Checking
Jingyi Sun, Greta Warren, Irina Shklovski, Isabelle Augenstein
23 May 2025

AGENT-X: Adaptive Guideline-based Expert Network for Threshold-free AI-generated teXt detection
Jiatao Li, Mao Ye, Cheng Peng, Xunjian Yin, Xiaojun Wan
21 May 2025

Conformal Language Model Reasoning with Coherent Factuality
International Conference on Learning Representations (ICLR), 2025
Maxon Rubin-Toles, Maya Gambhir, Keshav Ramji, Aaron Roth, Surbhi Goel
21 May 2025

Creating General User Models from Computer Use
ACM Symposium on User Interface Software and Technology (UIST), 2025
Omar Shaikh, Shardul Sapkota, Shan Rizvi, Eric Horvitz, Joon Sung Park, Diyi Yang, Michael S. Bernstein
16 May 2025

Always Tell Me The Odds: Fine-grained Conditional Probability Estimation
Liaoyaqi Wang, Zhengping Jiang, Anqi Liu, Benjamin Van Durme
02 May 2025

Bi-directional Model Cascading with Proxy Confidence
David Warren, Mark Dras
27 Apr 2025

Comparing Uncertainty Measurement and Mitigation Methods for Large Language Models: A Systematic Review
Toghrul Abbasli, Kentaroh Toyoda, Yuan Wang, Leon Witt, Muhammad Asif Ali, Yukai Miao, Dan Li, Qingsong Wei
25 Apr 2025

Gauging Overprecision in LLMs: An Empirical Study
Adil Bahaj, Hamed Rahimi, Mohamed Chetouani, Mounir Ghogho
16 Apr 2025

Reasoning Models Know When They're Right: Probing Hidden States for Self-Verification
Anqi Zhang, Yulin Chen, Jane Pan, Chen Zhao, Aurojit Panda, Jinyang Li, He He
07 Apr 2025

A Desideratum for Conversational Agents: Capabilities, Challenges, and Future Directions
Emre Can Acikgoz, Cheng Qian, Hongru Wang, Vardhan Dongre, Xiusi Chen, Heng Ji, Dilek Hakkani-Tur, Gokhan Tur
07 Apr 2025

Uncertainty Quantification and Confidence Calibration in Large Language Models: A Survey
Xiaoou Liu, Tiejin Chen, Longchao Da, Chacha Chen, Zhen Lin, Hua Wei
20 Mar 2025

Uncertainty Distillation: Teaching Language Models to Express Semantic Confidence
Sophia Hager, David Mueller, Kevin Duh, Nicholas Andrews
18 Mar 2025

Don't lie to your friends: Learning what you know from collaborative self-play
Jacob Eisenstein, Reza Aghajani, Adam Fisch, Dheeru Dua, Fantine Huot, Mirella Lapata, Vicky Zayats, Jonathan Berant
18 Mar 2025

Calibrating Verbal Uncertainty as a Linear Feature to Reduce Hallucinations
Ziwei Ji, L. Yu, Yeskendir Koishekenov, Yejin Bang, Anthony Hartshorn, Alan Schelten, Cheng Zhang, Pascale Fung, Nicola Cancedda
18 Mar 2025

Uncertainty in Action: Confidence Elicitation in Embodied Agents
Tianjiao Yu, Vedant Shah, Muntasir Wahed, Kiet A. Nguyen, Adheesh Sunil Juvekar, Tal August, Ismini Lourentzou
13 Mar 2025

Rewarding Doubt: A Reinforcement Learning Approach to Calibrated Confidence Expression of Large Language Models
Paul Stangel, David Bani-Harouni, Chantal Pellegrini, Ege Özsoy, Kamilia Zaripova, Matthias Keicher, Nassir Navab
04 Mar 2025

Conformal Linguistic Calibration: Trading-off between Factuality and Specificity
Zhengping Jiang, Anqi Liu, Benjamin Van Durme
26 Feb 2025

Gatekeeper: Improving Model Cascades Through Confidence Tuning
Stephan Rabanser, Nathalie Rauschmayr, Achin Kulshrestha, Petra Poklukar, Wittawat Jitkrittum, Sean Augenstein, Congchao Wang, Federico Tombari
26 Feb 2025

Uncertainty Quantification in Retrieval Augmented Question Answering
Laura Perez-Beltrachini, Mirella Lapata
25 Feb 2025

Verify when Uncertain: Beyond Self-Consistency in Black Box Hallucination Detection
Yihao Xue, Kristjan Greenewald, Youssef Mroueh, Baharan Mirzasoleiman
20 Feb 2025

Mind the Confidence Gap: Overconfidence, Calibration, and Distractor Effects in Large Language Models
Prateek Chhikara
16 Feb 2025

Reliable Text-to-SQL with Adaptive Abstention
Kaiwen Chen, Yueting Chen, Xiaohui Yu, Nick Koudas
18 Jan 2025

Software Engineering and Foundation Models: Insights from Industry Blogs Using a Jury of Foundation Models
Hao Li, Cor-Paul Bezemer, Ahmed E. Hassan
08 Jan 2025

UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Boyang Xue, Fei Mi, Qi Zhu, Hongru Wang, Rui Wang, Sheng Wang, Erxin Yu, Xuming Hu, Kam-Fai Wong
16 Dec 2024
Page 1 of 3