
AdvChain: Adversarial Chain-of-Thought Tuning for Robust Safety Alignment of Large Reasoning Models

29 September 2025
Zihao Zhu
Xinyu Wu
Gehan Hu
Siwei Lyu
Ke Xu
Baoyuan Wu
    LRM
ArXiv (abs: 2509.24269) · PDF · HTML · HuggingFace (3 upvotes)

Papers citing "AdvChain: Adversarial Chain-of-Thought Tuning for Robust Safety Alignment of Large Reasoning Models"

No papers found
