arXiv:2506.23464 — v2 (latest)
The Confidence Paradox: Can LLM Know When It's Wrong
30 June 2025
Sahil Tripathi, Md Tabrez Nafis, Imran Hussain, Jiechao Gao
Papers citing "The Confidence Paradox: Can LLM Know When It's Wrong" (4 papers)
1. Failure Modes in LLM Systems: A System-Level Taxonomy for Reliable AI Applications — Vaishali Vinay (25 Nov 2025)
2. Rewarding the Journey, Not Just the Destination: A Composite Path and Answer Self-Scoring Reward Mechanism for Test-Time Reinforcement Learning — Chenwei Tang, Jingyu Xing, Xinyu Liu, Wei Ju, Jiancheng Lv, Fan Zhang, Deng Xiong, Ziyue Qiao (20 Oct 2025) [LRM]
3. Do LLMs Know They Are Being Tested? Evaluation Awareness and Incentive-Sensitive Failures in GPT-OSS-20B — Nisar Ahmed, Muhammad Imran Zaman, Gulshan Saleem, Ali Hassan (08 Oct 2025) [LRM]
4. FalseCrashReducer: Mitigating False Positive Crashes in OSS-Fuzz-Gen Using Agentic AI — Paschal C. Amusuo, Dongge Liu, Ricardo Andres Calvo Mendez, Jonathan Metzman, Oliver Chang, James C. Davis (02 Oct 2025)