
Reconfidencing LLMs from the Grouping Loss Perspective
arXiv:2402.04957 · 7 February 2024
Lihu Chen, Alexandre Perez-Lebel, Fabian M. Suchanek, Gaël Varoquaux

Papers citing "Reconfidencing LLMs from the Grouping Loss Perspective"

5 / 5 papers shown
Taming Overconfidence in LLMs: Reward Calibration in RLHF
Jixuan Leng, Chengsong Huang, Banghua Zhu, Jiaxin Huang
13 Oct 2024
The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM
26 Apr 2023
Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis
Yuxin Xiao, Paul Pu Liang, Umang Bhatt, W. Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency
10 Oct 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022
Reducing conversational agents' overconfidence through linguistic calibration
Sabrina J. Mielke, Arthur Szlam, Emily Dinan, Y-Lan Boureau
30 Dec 2020