arXiv: 2502.12202
BoT: Breaking Long Thought Processes of o1-like Large Language Models through Backdoor Attack
16 February 2025
Authors: Zihao Zhu, Hongbao Zhang, Mingda Zhang, Ruotong Wang, Guanzong Wu, Ke Xu, Baoyuan Wu
Tags: AAML, LRM
Papers citing "BoT: Breaking Long Thought Processes of o1-like Large Language Models through Backdoor Attack" (3 papers shown)
Safety in Large Reasoning Models: A Survey
Authors: Cheng Wang, Y. Liu, B. Li, Duzhen Zhang, Z. Li, Junfeng Fang
Tags: LRM
24 Apr 2025
ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs
Authors: Gejian Zhao, Hanzhou Wu, Xinpeng Zhang, Athanasios V. Vasilakos
Tags: LRM
08 Apr 2025
Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable
Authors: Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Zachary Yahn, Yichang Xu, Ling Liu
01 Mar 2025