ResearchTrend.AI

arXiv:2411.17792
$H^3$Fusion: Helpful, Harmless, Honest Fusion of Aligned LLMs

26 November 2024
Selim Furkan Tekin, Fatih Ilhan, Tiansheng Huang, Sihao Hu, Zachary Yahn, Ling Liu

Papers citing "$H^3$Fusion: Helpful, Harmless, Honest Fusion of Aligned LLMs"

1 / 1 papers shown

Title: Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable
Authors: Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Zachary Yahn, Yichang Xu, Ling Liu
Date: 01 Mar 2025