ResearchTrend.AI

MT-PATCHER: Selective and Extendable Knowledge Distillation from Large Language Models for Machine Translation

14 March 2024
Jiahuan Li, Shanbo Cheng, Shujian Huang, Jiajun Chen

Papers citing "MT-PATCHER: Selective and Extendable Knowledge Distillation from Large Language Models for Machine Translation"

2 / 2 papers shown
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
03 May 2023
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020