Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations
Christian Tomani, Kamalika Chaudhuri, Ivan Evtimov, Daniel Cremers, Mark Ibrahim
16 April 2024 · arXiv: 2404.10960
Papers citing "Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations" (6 of 6 papers shown)
Calibrating Verbal Uncertainty as a Linear Feature to Reduce Hallucinations
Ziwei Ji, L. Yu, Yeskendir Koishekenov, Yejin Bang, Anthony Hartshorn, Alan Schelten, Cheng Zhang, Pascale Fung, Nicola Cancedda
18 Mar 2025
R2-KG: General-Purpose Dual-Agent Framework for Reliable Reasoning on Knowledge Graphs
Sumin Jo, Junseong Choi, Jiho Kim, E. Choi
18 Feb 2025
Cognitively Inspired Energy-Based World Models
Alexi Gladstone, Ganesh Nanduru, Md. Mofijul Islam, Aman Chadha, Jundong Li, Tariq Iqbal
13 Jun 2024
Investigating Uncertainty Calibration of Aligned Language Models under the Multiple-Choice Setting
Guande He, Peng Cui, Jianfei Chen, Wenbo Hu, Jun Zhu
18 Oct 2023
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
Deep Ganguli, Liane Lovitt, John Kernion, Amanda Askell, Yuntao Bai, ..., Nicholas Joseph, Sam McCandlish, C. Olah, Jared Kaplan, Jack Clark
23 Aug 2022
Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant
06 Jan 2021