arXiv: 2402.03563
Distinguishing the Knowable from the Unknowable with Language Models
International Conference on Machine Learning (ICML), 2024
5 February 2024
Gustaf Ahdritz, Tian Qin, Nikhil Vyas, Boaz Barak, Benjamin L. Edelman
Links: ArXiv (abs) · PDF · HTML · GitHub (11★)
Papers citing "Distinguishing the Knowable from the Unknowable with Language Models" (13 papers)
Are LLMs Good Safety Agents or a Propaganda Engine?
Neemesh Yadav, Francesco Ortu, Jiarui Liu, Joeun Yook, Bernhard Schölkopf, Rada Mihalcea, Alberto Cazzaniga, Zhijing Jin
28 Nov 2025
Efficient semantic uncertainty quantification in language models via diversity-steered sampling
Ji Won Park, K. Cho
24 Oct 2025
The Role of Model Confidence on Bias Effects in Measured Uncertainties for Vision-Language Models
Xinyi Liu, Weiguang Wang, Hangfeng He
20 Jun 2025
LPASS: Linear Probes as Stepping Stones for vulnerability detection using compressed LLMs
Journal of Information Security and Applications (JISA), 2025
Luis Ibanez-Lissen, Lorena Gonzalez-Manzano, José Maria De Fuentes, Nicolas Anciaux
30 May 2025
A Graph Perspective to Probe Structural Patterns of Knowledge in Large Language Models
Utkarsh Sahu, Zhisheng Qi, Y. Lei, Ryan Rossi, Franck Dernoncourt, Nesreen K. Ahmed, M. Halappanavar, Yao Ma, Yu Wang
25 May 2025
Conformal Language Model Reasoning with Coherent Factuality
International Conference on Learning Representations (ICLR), 2025
Maxon Rubin-Toles, Maya Gambhir, Keshav Ramji, Aaron Roth, Surbhi Goel
21 May 2025
A Survey of Uncertainty Estimation Methods on Large Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Zhiqiu Xia, Jinxuan Xu, Yuqian Zhang, Hang Liu
28 Feb 2025
A Survey on Uncertainty Quantification of Large Language Models: Taxonomy, Open Research Challenges, and Future Directions
ACM Computing Surveys (ACM CSUR), 2024
Ola Shorinwa, Zhiting Mei, Justin Lidard, Allen Z. Ren, Anirudha Majumdar
07 Dec 2024
Do LLMs "know" internally when they follow instructions?
International Conference on Learning Representations (ICLR), 2024
Juyeon Heo, Christina Heinze-Deml, Oussama Elachqar, Shirley Ren, Udhay Nallasamy, Andy Miller, Kwan Ho Ryan Chan, Jaya Narain
18 Oct 2024
Do LLMs estimate uncertainty well in instruction-following?
International Conference on Learning Representations (ICLR), 2024
Juyeon Heo, Miao Xiong, Christina Heinze-Deml, Jaya Narain
18 Oct 2024
MAQA: Evaluating Uncertainty Quantification in LLMs Regarding Data Uncertainty
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Yongjin Yang, Haneul Yoo, Hwaran Lee
13 Aug 2024
Estimating the Hallucination Rate of Generative AI
Andrew Jesson, Nicolas Beltran-Velez, Quentin Chu, Sweta Karlekar, Jannik Kossen, Yarin Gal, John P. Cunningham, David M. Blei
11 Jun 2024
Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach
Linyu Liu, Yu Pan, Xiaocheng Li, Guanting Chen
24 Apr 2024