RealSafe-R1: Safety-Aligned DeepSeek-R1 without Compromising Reasoning Capability

14 April 2025
Yichi Zhang
Zihao Zeng
Dongbai Li
Yao Huang
Zhijie Deng
Yinpeng Dong
Abstract

Large Reasoning Models (LRMs), such as OpenAI o1 and DeepSeek-R1, have been progressing rapidly and achieving breakthrough performance on complex reasoning tasks such as mathematics and coding. However, the open-source R1 models raise safety concerns in wide-ranging applications, such as a tendency to comply with malicious queries, which greatly limits the practical utility of these powerful models. In this paper, we introduce RealSafe-R1, safety-aligned versions of the DeepSeek-R1 distilled models. To train these models, we construct a dataset of 15k safety-aware reasoning trajectories generated by DeepSeek-R1 under explicit instructions specifying the expected refusal behavior. Both quantitative experiments and qualitative case studies demonstrate the models' improved safety guardrails against both harmful queries and jailbreak attacks. Importantly, unlike prior safety-alignment efforts that often compromise reasoning performance, our method preserves the models' reasoning capabilities by keeping the training data within the original generation distribution. Model weights of RealSafe-R1 are open-source at this https URL.
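
The abstract outlines a simple recipe: wrap harmful queries with an explicit instruction describing the expected refusal, let DeepSeek-R1 generate full reasoning trajectories, keep the ones that actually refuse, and fine-tune the distilled models on those pairs. Below is a minimal, hypothetical Python sketch of that data-construction loop; the instruction wording, the helper names (generate, looks_like_refusal), and the filtering heuristic are illustrative assumptions, not the paper's exact recipe.

# Hypothetical sketch of the data-construction step described in the abstract.
# Harmful prompts are wrapped with an explicit instruction describing the
# expected refusal behavior, DeepSeek-R1 generates a full reasoning trajectory,
# and only trajectories that actually end in a refusal are kept for SFT.
from dataclasses import dataclass
from typing import Callable, List

# Assumed steering instruction; the paper's exact wording is not given here.
REFUSAL_INSTRUCTION = (
    "The following request is harmful. Think through why it should not be "
    "answered, then politely refuse without revealing harmful content."
)

@dataclass
class Trajectory:
    query: str
    reasoning_and_answer: str  # full chain-of-thought plus the final refusal

def build_safety_dataset(
    harmful_queries: List[str],
    generate: Callable[[str], str],             # placeholder for a call to DeepSeek-R1
    looks_like_refusal: Callable[[str], bool],  # placeholder refusal check
) -> List[Trajectory]:
    """Collect safety-aware reasoning trajectories for supervised fine-tuning."""
    dataset: List[Trajectory] = []
    for query in harmful_queries:
        # The explicit instruction steers R1 toward the expected refusal behavior,
        # so the trajectory stays within R1's own generation distribution.
        prompt = f"{REFUSAL_INSTRUCTION}\n\nUser request: {query}"
        output = generate(prompt)
        if looks_like_refusal(output):
            # Store the original query (without the steering instruction) as the
            # input, so the fine-tuned model learns to refuse it unprompted.
            dataset.append(Trajectory(query=query, reasoning_and_answer=output))
    return dataset

Because the kept trajectories come from R1's own sampling distribution, standard supervised fine-tuning on these pairs can strengthen refusals without pulling the model off-distribution, which is the property the abstract credits for preserving reasoning capability.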

@article{zhang2025_2504.10081,
  title={RealSafe-R1: Safety-Aligned DeepSeek-R1 without Compromising Reasoning Capability},
  author={Yichi Zhang and Zihao Zeng and Dongbai Li and Yao Huang and Zhijie Deng and Yinpeng Dong},
  journal={arXiv preprint arXiv:2504.10081},
  year={2025}
}