arXiv:2405.16856
Can We Trust LLMs? Mitigate Overconfidence Bias in LLMs through Knowledge Transfer
27 May 2024 · Haoyan Yang, Yixuan Wang, Xingyin Xu, Hanyuan Zhang, Yirong Bian
Papers citing "Can We Trust LLMs? Mitigate Overconfidence Bias in LLMs through Knowledge Transfer" (8 papers)
- Large Language Models are Unreliable for Cyber Threat Intelligence · Emanuele Mezzi, Fabio Massacci, Katja Tuma · 29 Mar 2025
- Position: Standard Benchmarks Fail -- LLM Agents Present Overlooked Risks for Financial Applications [ELM] · Zichen Chen, Jiaao Chen, Jianda Chen, Misha Sra · 21 Feb 2025
- Option-ID Based Elimination For Multiple Choice Questions · Zhenhao Zhu, Bulou Liu, Qingyao Ai, Y. Liu · 25 Jan 2025
- A Survey of Calibration Process for Black-Box LLMs · Liangru Xie, Hui Liu, Jingying Zeng, Xianfeng Tang, Yan Han, Chen Luo, Jing Huang, Zhen Li, Suhang Wang, Qi He · 17 Dec 2024
- Enhancing Trust in Large Language Models with Uncertainty-Aware Fine-Tuning [HILM] · R. Krishnan, Piyush Khanna, Omesh Tickoo · 03 Dec 2024
- AdaSwitch: Adaptive Switching between Small and Large Agents for Effective Cloud-Local Collaborative Learning [LLMAG] · Hao-Lun Sun, Jiayi Wu, Hengyi Cai, Xiaochi Wei, Yue Feng, Bo Wang, S. Wang, Yan Zhang, Dawei Yin · 17 Oct 2024
- Mitigating Neural Network Overconfidence with Logit Normalization [OODD] · Hongxin Wei, Renchunzi Xie, Hao-Ran Cheng, Lei Feng, Bo An, Yixuan Li · 19 May 2022
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding [ELM] · Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · 20 Apr 2018