Cited By

Learning to Trust Your Feelings: Leveraging Self-awareness in LLMs for Hallucination Mitigation
Yuxin Liang, Zhuoyang Song, Hao Wang, Jiaxing Zhang
arXiv:2401.15449 · HILM · 27 January 2024
Papers citing "Learning to Trust Your Feelings: Leveraging Self-awareness in LLMs for Hallucination Mitigation" (23 of 23 papers shown)
1. MAC-Tuning: LLM Multi-Compositional Problem Reasoning with Enhanced Knowledge Boundary Awareness · Junsheng Huang, Zhitao He, Sandeep Polisetty, Q. Wang, May Fung · KELM · 30 Apr 2025
2. AI Awareness · X. Li, Haoyuan Shi, Rongwu Xu, Wei Xu · 25 Apr 2025
3. Block Toeplitz Sparse Precision Matrix Estimation for Large-Scale Interval-Valued Time Series Forecasting · Wan Tian, Zhongfeng Qin · AI4TS · 04 Apr 2025
4. The Illusionist's Prompt: Exposing the Factual Vulnerabilities of Large Language Models with Linguistic Nuances · Yining Wang, Y. Wang, Xi Li, Mi Zhang, Geng Hong, Min Yang · AAML, HILM · 01 Apr 2025
5. Entropy-based Exploration Conduction for Multi-step Reasoning · Jinghan Zhang, Xiting Wang, Fengran Mo, Yeyang Zhou, Wanfu Gao, Kunpeng Liu · LRM · 20 Mar 2025
6. DeLTa: A Decoding Strategy based on Logit Trajectory Prediction Improves Factuality and Reasoning Ability · Yunzhen He, Yusuke Takase, Yoichi Ishibashi, Hidetoshi Shimodaira · 04 Mar 2025
7. UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models · Boyang Xue, Fei Mi, Qi Zhu, Hongru Wang, Rui Wang, Sheng Wang, Erxin Yu, Xuming Hu, Kam-Fai Wong · HILM · 16 Dec 2024
8. SEER: Self-Aligned Evidence Extraction for Retrieval-Augmented Generation · Xinping Zhao, Dongfang Li, Yan Zhong, Boren Hu, Yibin Chen, Baotian Hu, Min Zhang · 15 Oct 2024
9. Varying Shades of Wrong: Aligning LLMs with Wrong Answers Only · Jihan Yao, Wenxuan Ding, Shangbin Feng, Lucy Lu Wang, Yulia Tsvetkov · 14 Oct 2024
10. CLUE: Concept-Level Uncertainty Estimation for Large Language Models · Yu-Hsiang Wang, Andrew Bai, Che-Ping Tsai, Cho-Jui Hsieh · LRM · 04 Sep 2024
11. Know Your Limits: A Survey of Abstention in Large Language Models · Bingbing Wen, Jihan Yao, Shangbin Feng, Chenjun Xu, Yulia Tsvetkov, Bill Howe, Lucy Lu Wang · 25 Jul 2024
12. Evaluating and Enhancing Trustworthiness of LLMs in Perception Tasks · Yang You, Jiaqi Han, Yinan Yu, Christian Berger · 18 Jul 2024
13. Leveraging Graph Structures to Detect Hallucinations in Large Language Models · Noa Nonkes, Sergei Agaronian, Evangelos Kanoulas, Roxana Petcu · 05 Jul 2024
14. BeHonest: Benchmarking Honesty in Large Language Models · Steffi Chern, Zhulin Hu, Yuqing Yang, Ethan Chern, Yuan Guo, Jiahe Jin, Binjie Wang, Pengfei Liu · HILM, ALM · 19 Jun 2024
15. Self-training Large Language Models through Knowledge Detection · Wei Jie Yeo, Teddy Ferdinan, Przemyslaw Kazienko, Ranjan Satapathy, Erik Cambria · 17 Jun 2024
16. A Survey of Language-Based Communication in Robotics · William Hunt, Sarvapali D. Ramchurn, Mohammad D. Soorati · LM&Ro · 06 Jun 2024
17. SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales · Tianyang Xu, Shujin Wu, Shizhe Diao, Xiaoze Liu, Xingyao Wang, Yangyi Chen, Jing Gao · LRM · 31 May 2024
18. Evaluating Consistency and Reasoning Capabilities of Large Language Models · Yash Saxena, Sarthak Chopra, Arunendra Mani Tripathi · ELM, LRM · 25 Apr 2024
19. Into the Unknown: Self-Learning Large Language Models · Teddy Ferdinan, Jan Kocoń, P. Kazienko · 14 Feb 2024
20. Building Guardrails for Large Language Models · Yizhen Dong, Ronghui Mu, Gao Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang · OffRL · 02 Feb 2024
21. How Language Model Hallucinations Can Snowball · Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith · HILM, LRM · 22 May 2023
22. The Internal State of an LLM Knows When It's Lying · A. Azaria, Tom Michael Mitchell · HILM · 26 Apr 2023
23. Training language models to follow instructions with human feedback · Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe · OSLM, ALM · 04 Mar 2022