Reconfidencing LLMs from the Grouping Loss Perspective
Lihu Chen, Alexandre Perez-Lebel, Fabian M. Suchanek, Gaël Varoquaux
arXiv:2402.04957, 7 February 2024
Papers citing "Reconfidencing LLMs from the Grouping Loss Perspective" (5 papers):

Taming Overconfidence in LLMs: Reward Calibration in RLHF
Jixuan Leng, Chengsong Huang, Banghua Zhu, Jiaxin Huang
13 Oct 2024

The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
26 Apr 2023

Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis
Yuxin Xiao, Paul Pu Liang, Umang Bhatt, W. Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency
10 Oct 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

Reducing conversational agents' overconfidence through linguistic calibration
Sabrina J. Mielke, Arthur Szlam, Emily Dinan, Y-Lan Boureau
30 Dec 2020