arXiv:2508.08139 (v2, latest)
Can LLMs Detect Their Confabulations? Estimating Reliability in Uncertainty-Aware Language Models
11 August 2025
Tianyi Zhou
Johanne Medina
Sanjay Chawla
Papers citing
"Can LLMs Detect Their Confabulations? Estimating Reliability in Uncertainty-Aware Language Models"
No citing papers found.