Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data

11 August 2021
Kuluhan Binici, N. Pham, T. Mitra, K. Leman

Papers citing "Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data" (26 papers shown)

ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence
Guanghui Wang, Zhiyong Yang, Z. Wang, Shi Wang, Qianqian Xu, Q. Huang
07 May 2025

CAE-DFKD: Bridging the Transferability Gap in Data-Free Knowledge Distillation
Zherui Zhang, Changwei Wang, Rongtao Xu, W. Xu, Shibiao Xu, Yu Zhang, Li Guo
30 Apr 2025

Scaling Down Text Encoders of Text-to-Image Diffusion Models
Lifu Wang, Daqing Liu, Xinchen Liu, Xiaodong He
25 Mar 2025 · VLM

Applications of Knowledge Distillation in Remote Sensing: A Survey
Yassine Himeur, N. Aburaed, O. Elharrouss, Iraklis Varlamis, Shadi Atalla, W. Mansoor, Hussain Al Ahmad
18 Sep 2024

CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble
Jonathan Rosenthal, Shanchao Liang, Kevin Zhang, Lin Tan
16 Sep 2024 · MIACV

AMD: Automatic Multi-step Distillation of Large-scale Vision Models
Cheng Han, Qifan Wang, S. Dianat, Majid Rabbani, Raghuveer M. Rao, Yi Fang, Qiang Guan, Lifu Huang, Dongfang Liu
05 Jul 2024 · VLM

Mind the Gap Between Synthetic and Real: Utilizing Transfer Learning to Probe the Boundaries of Stable Diffusion Generated Data
Leonhard Hennicke, C. Adriano, Holger Giese, Jan Mathias Koehler, Lukas Schott
06 May 2024 · DiffM

De-confounded Data-free Knowledge Distillation for Handling Distribution Shifts
Yuzheng Wang, Dingkang Yang, Zhaoyu Chen, Yang Liu, Siao Liu, Wenqiang Zhang, Lihua Zhang, Lizhe Qi
28 Mar 2024

Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation
Hyunjune Shin, Dong-Wan Choi
18 Feb 2024 · AAML

Data-Free Hard-Label Robustness Stealing Attack
Xiaojian Yuan, Kejiang Chen, Wen Huang, Jie Zhang, Weiming Zhang, Neng H. Yu
10 Dec 2023 · AAML

Robustness-Guided Image Synthesis for Data-Free Quantization
Jianhong Bai, Yuchen Yang, Huanpeng Chu, Hualiang Wang, Zuo-Qiang Liu, Ruizhe Chen, Xiaoxuan He, Lianrui Mu, Chengfei Cai, Haoji Hu
05 Oct 2023 · DiffM, MQ

NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation
Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Harandi, Quan Hung Tran, Dinh Q. Phung
30 Sep 2023

Memory-Efficient Continual Learning Object Segmentation for Long Video
Amir Nazemi, Mohammad Javad Shafiee, Zahra Gharaee, Paul Fieguth
26 Sep 2023 · VOS, CLL

DFRD: Data-Free Robustness Distillation for Heterogeneous Federated Learning
Kangyang Luo, Shuai Wang, Y. Fu, Xiang Li, Yunshi Lan, Minghui Gao
24 Sep 2023 · FedML

Influence Function Based Second-Order Channel Pruning-Evaluating True Loss Changes For Pruning Is Possible Without Retraining
Hongrong Cheng, Miao Zhang, Javen Qinfeng Shi
13 Aug 2023 · AAML

Sampling to Distill: Knowledge Transfer from Open-World Data
Yuzheng Wang, Zhaoyu Chen, Jie M. Zhang, Dingkang Yang, Zuhao Ge, Yang Liu, Siao Liu, Yunquan Sun, Wenqiang Zhang, Lizhe Qi
31 Jul 2023

A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning
Zhenyi Wang, Enneng Yang, Li Shen, Heng-Chiao Huang
16 Jul 2023 · KELM, MU

Customizing Synthetic Data for Data-Free Student Learning
Shiya Luo, Defang Chen, Can Wang
10 Jul 2023

Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation
Gaurav Patel, Konda Reddy Mopuri, Qiang Qiu
28 Feb 2023

Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh
21 Sep 2022

On-Device Domain Generalization
Kaiyang Zhou, Yuanhan Zhang, Yuhang Zang, Jingkang Yang, Chen Change Loy, Ziwei Liu
15 Sep 2022 · OOD

Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy
Jingru Li, Sheng Zhou, Liangcheng Li, Haishuai Wang, Zhi Yu, Jiajun Bu
29 Aug 2022

Teacher Guided Training: An Efficient Framework for Knowledge Transfer
Manzil Zaheer, A. S. Rawat, Seungyeon Kim, Chong You, Himanshu Jain, Andreas Veit, Rob Fergus, Surinder Kumar
14 Aug 2022 · VLM

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay
Kuluhan Binici, Shivam Aggarwal, N. Pham, K. Leman, T. Mitra
09 Jan 2022 · TTA

Data-Free Knowledge Transfer: A Survey
Yuang Liu, Wei Zhang, Jun Wang, Jianyong Wang
31 Dec 2021

Incremental Class Learning using Variational Autoencoders with Similarity Learning
Jiahao Huo, Terence L van Zyl
04 Oct 2021 · CLL