ResearchTrend.AI

arXiv:2410.15483 · Cited By
Mitigating Forgetting in LLM Supervised Fine-Tuning and Preference Learning

Date: 20 October 2024
Authors: H. Fernando, Han Shen, Parikshit Ram, Yi Zhou, Horst Samulowitz, Nathalie Baracaldo, Tianyi Chen
Topic: CLL
Links: ArXiv · PDF · HTML

Papers citing "Mitigating Forgetting in LLM Supervised Fine-Tuning and Preference Learning"

2 / 2 papers shown
Title: Towards Widening The Distillation Bottleneck for Reasoning Models
Authors: Huifeng Yin, Yu Zhao, M. Wu, Xuanfan Ni, Bo Zeng, ..., Liangying Shao, Chenyang Lyu, Longyue Wang, Weihua Luo, Kaifu Zhang
Topic: LRM
Date: 03 Mar 2025
Title: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning
Authors: Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu
Date: 28 May 2024