MoFO: Momentum-Filtered Optimizer for Mitigating Forgetting in LLM Fine-Tuning
arXiv:2407.20999 · 30 July 2024
Authors: Yupeng Chen, Senmiao Wang, Zhihang Lin, Yushun Zhang, Tian Ding, Ruoyu Sun
Topics: CLL
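
This listing does not describe the method itself. Purely as an illustration of what a "momentum-filtered" update could look like, the sketch below applies an Adam-style step only to the entries of a parameter tensor whose momentum magnitude is largest. The function name, the top-fraction rule, and all parameter values are assumptions made for this sketch, not details taken from the paper.

import torch

def momentum_filtered_step(param, grad, exp_avg, exp_avg_sq, step,
                           lr=1e-5, betas=(0.9, 0.999), eps=1e-8,
                           update_fraction=0.1):
    # Hypothetical sketch: an Adam-style step that only updates the entries
    # of this parameter tensor whose first-moment (momentum) magnitude falls
    # in the top `update_fraction`; all other entries keep their current
    # (e.g. pretrained) values.
    beta1, beta2 = betas
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)               # first moment
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # second moment

    # Bias-corrected Adam update direction.
    denom = (exp_avg_sq / (1 - beta2 ** step)).sqrt().add_(eps)
    update = (exp_avg / (1 - beta1 ** step)) / denom

    # Momentum filter: keep only the top-fraction entries by |momentum|.
    k = max(1, int(update_fraction * exp_avg.numel()))
    threshold = exp_avg.abs().flatten().kthvalue(exp_avg.numel() - k + 1).values
    mask = (exp_avg.abs() >= threshold).to(update.dtype)

    param.add_(update * mask, alpha=-lr)
    return param

In a full optimizer this kind of step would run per parameter tensor inside an Optimizer.step() loop, with exp_avg and exp_avg_sq kept in the optimizer state.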

Papers citing "MoFO: Momentum-Filtered Optimizer for Mitigating Forgetting in LLM Fine-Tuning" (4 of 4 papers shown)

LeanAgent: Lifelong Learning for Formal Theorem Proving
Adarsh Kumarappan, Mo Tiwari, Peiyang Song, Robert Joseph George, Chaowei Xiao, Anima Anandkumar
Topics: CLL, LLMAG, LRM · Metrics: 47 / 8 / 0 · 08 Oct 2024

Orthogonal Subspace Learning for Language Model Continual Learning
Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, Xuanjing Huang
Topics: CLL · Metrics: 110 / 85 / 0 · 22 Oct 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM · Metrics: 301 / 11,730 / 0 · 04 Mar 2022

The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
Topics: AIMat · Metrics: 236 / 1,508 / 0 · 31 Dec 2020