Reasoning Models Hallucinate More: Factuality-Aware Reinforcement Learning for Large Reasoning Models
arXiv:2505.24630 (v2, latest) · 30 May 2025
Junyi Li, Hwee Tou Ng
Tags: OffRL, HILM, LRM
Code: GitHub (15★)
Papers citing "Reasoning Models Hallucinate More: Factuality-Aware Reinforcement Learning for Large Reasoning Models"
6 / 6 papers shown
Understanding New-Knowledge-Induced Factual Hallucinations in LLMs: Analysis and Interpretation
Renfei Dang, Peng Hu, Changjiang Gao, Shujian Huang, Min Zhang
04 Nov 2025
MR-Align: Meta-Reasoning Informed Factuality Alignment for Large Reasoning Models
Xinming Wang, Jian Xu, Bin Yu, Sheng Lian, Hongzhu Yi, ..., Boran Wang, Hongming Yang, Han Hu, Xu-Yao Zhang, Cheng-Lin Liu
Tags: HILM, LRM
27 Oct 2025
Position: The Hidden Costs and Measurement Gaps of Reinforcement Learning with Verifiable Rewards
Aaron Tu, Weihao Xuan, Heli Qi, X. Y. Huang, Qingcheng Zeng, ..., Amin Saberi, Naoto Yokoya, Jure Leskovec, Yejin Choi, Fang Wu
Tags: OffRL
26 Sep 2025
A Comprehensive Survey on Trustworthiness in Reasoning with Large Language Models
Yanbo Wang, Yongcan Yu, Jian Liang, Ran He
Tags: HILM, LRM
04 Sep 2025
Learning to Reason for Factuality
Xilun Chen, Ilia Kulikov, Vincent-Pierre Berges, Barlas Oğuz, Rulin Shao, Gargi Ghosh, Jason Weston, Anuj Kumar
Tags: OffRL, HILM, LRM
07 Aug 2025
Teaching Large Language Models to Maintain Contextual Faithfulness via Synthetic Tasks and Reinforcement Learning
Shuzheng Si, Haozhe Zhao, Cheng Gao, Yuzhuo Bai, Zhitong Wang, ..., Gang Chen, Fanchao Qi, Minjia Zhang, Baobao Chang, Maosong Sun
Tags: SyDa, HILM
22 May 2025