Knowledge Distillation for Efficient Sequences of Training Runs

arXiv:2303.06480 · 11 March 2023
Xingyu Liu, A. Leonardi, Lu Yu, Chris Gilmer-Hill, Matthew L. Leavitt, Jonathan Frankle

Papers citing "Knowledge Distillation for Efficient Sequences of Training Runs" (4 papers shown)

  • Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models
    Junjie Yang, Junhao Song, Xudong Han, Ziqian Bi, Tianyang Wang, ..., Y. Zhang, Qian Niu, Benji Peng, Keyu Chen, Ming Liu
    VLM · 18 Apr 2025
  • TiC-CLIP: Continual Training of CLIP Models
    Saurabh Garg, Mehrdad Farajtabar, Hadi Pouransari, Raviteja Vemulapalli, Sachin Mehta, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri
    VLM · CLIP · 24 Oct 2023
  • Transferring Learning Trajectories of Neural Networks
    Daiki Chijiwa
    23 May 2023
  • Reduce, Reuse, Recycle: Improving Training Efficiency with Distillation
    Cody Blakeney, Jessica Zosa Forde, Jonathan Frankle, Ziliang Zong, Matthew L. Leavitt
    VLM · 01 Nov 2022