Cited By
When to Trust LLMs: Aligning Confidence with Response Quality
arXiv 2404.17287 · 26 April 2024
Shuchang Tao, Liuyi Yao, Hanxing Ding, Yuexiang Xie, Qi Cao, Fei Sun, Jinyang Gao, Huawei Shen, Bolin Ding
Papers citing "When to Trust LLMs: Aligning Confidence with Response Quality" (5 / 5 papers shown)
Rewarding Doubt: A Reinforcement Learning Approach to Confidence Calibration of Large Language Models
Paul Stangel, D. Bani-Harouni, Chantal Pellegrini, Ege Ozsoy, Kamilia Zaripova, Matthias Keicher, Nassir Navab
04 Mar 2025
Enhancing LLM Reliability via Explicit Knowledge Boundary Modeling
Hang Zheng, Hongshen Xu, Yuncong Liu, Lu Chen, Pascale Fung, Kai Yu
04 Mar 2025
Adaptive Retrieval Without Self-Knowledge? Bringing Uncertainty Back Home
Viktor Moskvoretskii, M. Lysyuk, Mikhail Salnikov, Nikolay Ivanov, Sergey Pletenev, Daria Galimzianova, Nikita Krayko, Vasily Konovalov, Irina Nikishina, Alexander Panchenko
RALM
24 Feb 2025
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi
RALM
17 Oct 2023
Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
ALM
18 Sep 2019