Large Reasoning Models Learn Better Alignment from Flawed Thinking

1 October 2025
ShengYun Peng
Eric Michael Smith
Ivan Evtimov
Song Jiang
Pin-Yu Chen
Hongyuan Zhan
Haozhu Wang
Duen Horng Chau
Mahesh Pasupuleti
Jianfeng Chi
Topics: OffRL, LRM
Links: arXiv (abs) · PDF · HTML · Hugging Face (46 upvotes)

Papers citing "Large Reasoning Models Learn Better Alignment from Flawed Thinking"

Shape it Up! Restoring LLM Safety during Finetuning
ShengYun Peng
Pin-Yu Chen
Jianfeng Chi
Seongmin Lee
Duen Horng Chau
22 May 2025