MoE-CT: A Novel Approach For Large Language Models Training With Resistance To Catastrophic Forgetting

arXiv 2407.00875 · 25 June 2024
Tianhao Li, Shangjie Li, Binbin Xie, Deyi Xiong, Baosong Yang
Tags: CLL

Papers citing "MoE-CT: A Novel Approach For Large Language Models Training With Resistance To Catastrophic Forgetting"

4 papers shown
Neutral residues: revisiting adapters for model extension
Franck Signe Talla, Hervé Jégou, Edouard Grave
03 Oct 2024 · 0 citations

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
04 Mar 2022 · 11,909 citations

Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference
Sneha Kudugunta, Yanping Huang, Ankur Bapna, M. Krikun, Dmitry Lepikhin, Minh-Thang Luong, Orhan Firat
Tags: MoE
24 Sep 2021 · 106 citations

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
Tags: AIMat
26 Sep 2016 · 6,743 citations