Look Within, Why LLMs Hallucinate: A Causal Perspective (arXiv:2407.10153)
He Li, Haoang Chi, Mingyu Liu, Wenjing Yang · 14 July 2024 · LRM

Papers citing "Look Within, Why LLMs Hallucinate: A Causal Perspective" (6 of 6 papers shown)

Exploring Causal Effect of Social Bias on Faithfulness Hallucinations in Large Language Models
Zhenliang Zhang, Junzhe Zhang, Xinyu Hu, Huixuan Zhang, Xiaojun Wan · HILM · 11 Aug 2025

Interpretation Meets Safety: A Survey on Interpretation Methods and Tools for Improving LLM Safety
Seongmin Lee, Aeree Cho, Grace C. Kim, ShengYun Peng, Mansi Phute, Duen Horng Chau · LM&MA, AI4CE · 05 Jun 2025

The Tower of Babel Revisited: Multilingual Jailbreak Prompts on Closed-Source Large Language Models
Linghan Huang, Haolin Jin, Zhaoge Bi, Pengyue Yang, Peizhou Zhao, Taozhao Chen, Xiongfei Wu, Lei Ma, Huaming Chen · AAML · 18 May 2025

Position: Foundation Models Need Digital Twin Representations
Yiqing Shen, Hao Ding, Lalithkumar Seenivasan, Tianmin Shu, Mathias Unberath · AI4CE · 01 May 2025

Hallucination, Monofacts, and Miscalibration: An Empirical Investigation
Miranda Muqing Miao, Michael Kearns · 11 Feb 2025

Attention Heads of Large Language Models: A Survey
Patterns, 2024
Zifan Zheng, Yezhaohui Wang, Yuxin Huang, Shichao Song, Mingchuan Yang, Bo Tang, Feiyu Xiong, Zhiyu Li · LRM · 05 Sep 2024