Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation

19 October 2021
Sumanth Chennupati, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen

Papers citing "Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation"

6 / 6 papers shown
Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models
Junjie Yang, Junhao Song, Xudong Han, Ziqian Bi, Tianyang Wang, ..., Y. Zhang, Qian Niu, Benji Peng, Keyu Chen, Ming Liu
VLM · 18 Apr 2025

KnFu: Effective Knowledge Fusion
Seyed Jamal Seyed-Mohammadi, Kawa Atapour, J. Abouei, Arash Mohammadi
FedML · 18 Mar 2024

Stochastic Multiple Target Sampling Gradient Descent
Hoang Phan, Ngoc N. Tran, Trung Le, Toan M. Tran, Nhat Ho, Dinh Q. Phung
04 Jun 2022

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
06 Mar 2020

Bilevel Programming for Hyperparameter Optimization and Meta-Learning
Luca Franceschi, P. Frasconi, Saverio Salzo, Riccardo Grazzi, Massimiliano Pontil
13 Jun 2018

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018