arXiv:2507.21652
UnsafeChain: Enhancing Reasoning Model Safety via Hard Cases
29 July 2025
Raj Vardhan Tomar, Preslav Nakov, Yuxia Wang
Topic: LRM
Links: ArXiv (abs) · PDF · HTML · GitHub (3★)
Papers citing "UnsafeChain: Enhancing Reasoning Model Safety via Hard Cases"
AdvChain: Adversarial Chain-of-Thought Tuning for Robust Safety Alignment of Large Reasoning Models
Zihao Zhu, Xinyu Wu, Gehan Hu, Siwei Lyu, Ke Xu, Baoyuan Wu
Topic: LRM
29 Sep 2025