UnsafeChain: Enhancing Reasoning Model Safety via Hard Cases

29 July 2025
Raj Vardhan Tomar
Preslav Nakov
Yuxia Wang
    LRM
arXiv:2507.21652

Papers citing "UnsafeChain: Enhancing Reasoning Model Safety via Hard Cases"

AdvChain: Adversarial Chain-of-Thought Tuning for Robust Safety Alignment of Large Reasoning Models
Zihao Zhu
Xinyu Wu
Gehan Hu
Siwei Lyu
Ke Xu
Baoyuan Wu
LRM
29 Sep 2025