46
4

BoT: Breaking Long Thought Processes of o1-like Large Language Models through Backdoor Attack

Abstract

Longer thought, better performance: large language models with deep reasoning capabilities, particularly o1-like models, have demonstrated remarkable performance by generating extensive thought processes during inference. This trade-off reveals a potential vulnerability: adversaries could compromise model performance by forcing immediate responses without thought processes. To this end, in this paper, we introduce a novel attack scenario targeting the long thought processes of o1-like models and propose BoT (Break CoT), which can selectively break intrinsic reasoning mechanisms through backdoor attacks. BoT constructs poisoned datasets with designed triggers and injects backdoor by either supervised fine-tuning or direct preference optimization. When triggered, the model directly generates answers without thought processes, while maintaining normal reasoning capabilities for clean inputs. Extensive experiments on open-source o1-like models, including recent DeepSeek-R1, demonstrate that BoT nearly achieves high attack success rates while maintaining clean accuracy, highlighting the critical safety risk in current models. Furthermore, the relationship between task difficulty and helpfulness reveals a potential application for good, enabling users to customize model behavior based on task complexity. Code is available at \href{this https URL}{this https URL}.

View on arXiv
@article{zhu2025_2502.12202,
  title={ BoT: Breaking Long Thought Processes of o1-like Large Language Models through Backdoor Attack },
  author={ Zihao Zhu and Hongbao Zhang and Mingda Zhang and Ruotong Wang and Guanzong Wu and Ke Xu and Baoyuan Wu },
  journal={arXiv preprint arXiv:2502.12202},
  year={ 2025 }
}
Comments on this paper