Data-Free Knowledge Distillation for Deep Neural Networks

19 October 2017
Raphael Gontijo-Lopes
Stefano Fenu
Thad Starner

Papers citing "Data-Free Knowledge Distillation for Deep Neural Networks"

40 / 40 papers shown
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang
Dong Bok Lee
Hyungjoon Jang
Sung Ju Hwang
VLM
57
0
0
12 May 2025
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks
S. Joshi
Jiayi Ni
Baharan Mirzasoleiman
DD
67
2
0
03 Oct 2024
DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Qianlong Xiang
Miao Zhang
Yuzhang Shang
Jianlong Wu
Yan Yan
Liqiang Nie
DiffM
60
10
0
05 Sep 2024
Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu
Xuan Li
Danyang Liu
Haolun Wu
Xi Chen
Ju Wang
Xue Liu
21
16
0
08 Aug 2023
Sampling to Distill: Knowledge Transfer from Open-World Data
Yuzheng Wang
Zhaoyu Chen
Jie M. Zhang
Dingkang Yang
Zuhao Ge
Yang Liu
Siao Liu
Yunquan Sun
Wenqiang Zhang
Lizhe Qi
26
9
0
31 Jul 2023
Learning to Learn from APIs: Black-Box Data-Free Meta-Learning
Zixuan Hu
Li Shen
Zhenyi Wang
Baoyuan Wu
Chun Yuan
Dacheng Tao
47
7
0
28 May 2023
Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li
Yuxuan Li
Penghai Zhao
Renjie Song
Xiang Li
Jian Yang
29
19
0
22 May 2023
Self-discipline on multiple channels
Jiutian Zhao
Liangchen Luo
Hao Wang
19
0
0
27 Apr 2023
Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning
Zixuan Hu
Li Shen
Zhenyi Wang
Tongliang Liu
Chun Yuan
Dacheng Tao
47
4
0
20 Mar 2023
Dataset Distillation: A Comprehensive Review
Ruonan Yu
Songhua Liu
Xinchao Wang
DD
39
121
0
17 Jan 2023
Dataless Knowledge Fusion by Merging Weights of Language Models
Xisen Jin
Xiang Ren
Daniel Preotiuc-Pietro
Pengxiang Cheng
FedML
MoMe
15
211
0
19 Dec 2022
Scalable Collaborative Learning via Representation Sharing
Frédéric Berdoz
Abhishek Singh
Martin Jaggi
Ramesh Raskar
FedML
22
3
0
20 Nov 2022
Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do
Hung Le
D. Nguyen
Dang Nguyen
Haripriya Harikumar
T. Tran
Santu Rana
Svetha Venkatesh
18
32
0
21 Sep 2022
Factorizing Knowledge in Neural Networks
Xingyi Yang
Jingwen Ye
Xinchao Wang
MoMe
36
121
0
04 Jul 2022
A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning
Da-Wei Zhou
Qiwen Wang
Han-Jia Ye
De-Chuan Zhan
19
122
0
26 May 2022
Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
Xinyin Ma
Xinchao Wang
Gongfan Fang
Yongliang Shen
Weiming Lu
13
11
0
16 May 2022
Data-Free Adversarial Knowledge Distillation for Graph Neural Networks
Yu-Lin Zhuang
Lingjuan Lyu
Chuan Shi
Carl Yang
Lichao Sun
27
16
0
08 May 2022
DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
Xianing Chen
Qiong Cao
Yujie Zhong
Jing Zhang
Shenghua Gao
Dacheng Tao
ViT
32
76
0
27 Apr 2022
The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image
Yuki M. Asano
Aaqib Saeed
30
7
0
01 Dec 2021
Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning
James Smith
Yen-Chang Hsu
John C. Balloch
Yilin Shen
Hongxia Jin
Z. Kira
CLL
46
161
0
17 Jun 2021
Data-Free Knowledge Distillation for Heterogeneous Federated Learning
Zhuangdi Zhu
Junyuan Hong
Jiayu Zhou
FedML
13
627
0
20 May 2021
Graph-Free Knowledge Distillation for Graph Neural Networks
Xiang Deng
Zhongfei Zhang
26
65
0
16 May 2021
Visualizing Adapted Knowledge in Domain Transfer
Yunzhong Hou
Liang Zheng
111
54
0
20 Apr 2021
Knowledge Distillation as Semiparametric Inference
Tri Dao
G. Kamath
Vasilis Syrgkanis
Lester W. Mackey
22
31
0
20 Apr 2021
Efficient Encrypted Inference on Ensembles of Decision Trees
Kanthi Kiran Sarpatwar
Karthik Nandakumar
N. Ratha
J. Rayfield
Karthikeyan Shanmugam
Sharath Pankanti
Roman Vaculin
FedML
14
5
0
05 Mar 2021
Enhancing Data-Free Adversarial Distillation with Activation Regularization and Virtual Interpolation
Xiaoyang Qu
Jianzong Wang
Jing Xiao
14
14
0
23 Feb 2021
Towards Zero-Shot Knowledge Distillation for Natural Language Processing
Ahmad Rashid
Vasileios Lioutas
Abbas Ghaddar
Mehdi Rezagholizadeh
13
27
0
31 Dec 2020
Data-Free Model Extraction
Jean-Baptiste Truong
Pratyush Maini
R. Walls
Nicolas Papernot
MIACV
13
181
0
30 Nov 2020
Learnable Boundary Guided Adversarial Training
Jiequan Cui
Shu-Lin Liu
Liwei Wang
Jiaya Jia
OOD
AAML
19
124
0
23 Nov 2020
Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation
Gaurav Kumar Nayak
Konda Reddy Mopuri
Anirban Chakraborty
14
18
0
18 Nov 2020
Robustness and Diversity Seeking Data-Free Knowledge Distillation
Pengchao Han
Jihong Park
Shiqiang Wang
Yejun Liu
15
12
0
07 Nov 2020
Dataset Condensation with Gradient Matching
Bo-Lu Zhao
Konda Reddy Mopuri
Hakan Bilen
DD
28
472
0
10 Jun 2020
Knowledge Distillation: A Survey
Jianping Gou
B. Yu
Stephen J. Maybank
Dacheng Tao
VLM
19
2,835
0
09 Jun 2020
An Overview of Neural Network Compression
James O'Neill
AI4CE
45
98
0
05 Jun 2020
Towards Inheritable Models for Open-Set Domain Adaptation
Jogendra Nath Kundu
Naveen Venkat
R. Ambareesh
Rahul M. V.
R. Venkatesh Babu
VLM
9
117
0
09 Apr 2020
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Dongdong Wang
Yandong Li
Liqiang Wang
Boqing Gong
16
48
0
31 Mar 2020
Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN
Jingwen Ye
Yixin Ji
Xinchao Wang
Xin Gao
Mingli Song
24
53
0
20 Mar 2020
And the Bit Goes Down: Revisiting the Quantization of Neural Networks
Pierre Stock
Armand Joulin
Rémi Gribonval
Benjamin Graham
Hervé Jégou
MQ
29
149
0
12 Jul 2019
Zero-Shot Knowledge Distillation in Deep Networks
Gaurav Kumar Nayak
Konda Reddy Mopuri
Vaisakh Shaj
R. Venkatesh Babu
Anirban Chakraborty
8
245
0
20 May 2019
SlimNets: An Exploration of Deep Model Compression and Acceleration
Ini Oguntola
Subby Olubeko
Chris Sweeney
8
11
0
01 Aug 2018