Knowledge Distillation for Efficient Sequences of Training Runs
Xingyu Liu, A. Leonardi, Lu Yu, Chris Gilmer-Hill, Matthew L. Leavitt, Jonathan Frankle
arXiv:2303.06480 · 11 March 2023
Papers citing "Knowledge Distillation for Efficient Sequences of Training Runs" (4 of 4 shown)

Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models
Junjie Yang, Junhao Song, Xudong Han, Ziqian Bi, Tianyang Wang, ..., Y. Zhang, Qian Niu, Benji Peng, Keyu Chen, Ming Liu
VLM · 18 Apr 2025

TiC-CLIP: Continual Training of CLIP Models
Saurabh Garg, Mehrdad Farajtabar, Hadi Pouransari, Raviteja Vemulapalli, Sachin Mehta, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri
VLM, CLIP · 24 Oct 2023

Transferring Learning Trajectories of Neural Networks
Daiki Chijiwa
23 May 2023

Reduce, Reuse, Recycle: Improving Training Efficiency with Distillation
Cody Blakeney, Jessica Zosa Forde, Jonathan Frankle, Ziliang Zong, Matthew L. Leavitt
VLM · 01 Nov 2022