Data-Free Adversarial Distillation
23 December 2019 · arXiv: 1912.11006 (v3)
Authors: Gongfan Fang, Mingli Song, Chengchao Shen, Xinchao Wang, Da Chen, Xiuming Zhang
Links: arXiv (abs) · PDF · HTML · GitHub (99★)
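For context on the paper itself: Data-Free Adversarial Distillation trains a student network without access to the original training data. A generator synthesizes inputs that are optimized to maximize the discrepancy between the teacher's and student's outputs, while the student is trained to minimize that same discrepancy. Below is a minimal sketch of one such min-max round, assuming PyTorch and illustrative names (teacher, student, generator, opt_s, opt_g); it follows the paper's adversarial formulation with an MAE discrepancy, but it is a reading aid, not the authors' released implementation (see the GitHub link above for that).

```python
import torch
import torch.nn.functional as F

def dfad_round(teacher, student, generator, opt_s, opt_g,
               batch_size=256, z_dim=100, device="cpu"):
    """One min-max round of data-free adversarial distillation (sketch).

    No real data is used: every input comes from the generator.
    The teacher is assumed frozen (its parameters require no grad).
    """
    teacher.eval()

    # Student step: minimize the teacher-student discrepancy on
    # synthetic inputs. The generator is detached so it is not updated.
    z = torch.randn(batch_size, z_dim, device=device)
    x = generator(z).detach()
    with torch.no_grad():
        t_out = teacher(x)
    loss_s = F.l1_loss(student(x), t_out)   # MAE discrepancy
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()

    # Generator step: maximize the same discrepancy (gradient ascent,
    # written as descent on the negated loss). Gradients flow through
    # both networks' inputs back to the generator's parameters; opt_g
    # updates only the generator.
    z = torch.randn(batch_size, z_dim, device=device)
    x = generator(z)
    loss_g = -F.l1_loss(student(x), teacher(x))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_s.item(), loss_g.item()
```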
Papers citing "Data-Free Adversarial Distillation" (48 papers shown)
- Forget the Data and Fine-Tuning! Just Fold the Network to Compress. Dong Wang, Haris Šikić, Lothar Thiele, O. Saukh. 17 Feb 2025.
- Training-Free Restoration of Pruned Neural Networks. Keonho Lee, Minsoo Kim, Dong-Wan Choi. 06 Feb 2025.
- Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification. Yunyi Xuan, Weijie Chen, Shicai Yang, Di Xie, Luojun Lin, Yueting Zhuang. 21 Jul 2024. [VLM]
- Data-free Knowledge Distillation for Fine-grained Visual Categorization. Renrong Shao, Wei Zhang, Jianhua Yin, Jun Wang. 18 Apr 2024.
- FedDistill: Global Model Distillation for Local Model De-Biasing in Non-IID Federated Learning. Changlin Song, Divya Saxena, Jiannong Cao, Yuqing Zhao. 14 Apr 2024. [FedML]
- Causal-DFQ: Causality Guided Data-free Network Quantization. Yuzhang Shang, Bingxin Xu, Gaowen Liu, Ramana Rao Kompella, Yan Yan. 24 Sep 2023. [MQ, CML]
- Image Captions are Natural Prompts for Text-to-Image Models. Shiye Lei, Hao Chen, Senyang Zhang, Bo Zhao, Dacheng Tao. 17 Jul 2023. [VLM]
- Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation. Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu. 19 Jun 2023. [VLM, OffRL]
- Lion: Adversarial Distillation of Proprietary Large Language Models. Yuxin Jiang, Chunkit Chan, Yin Hua, Wei Wang. 22 May 2023. [ALM]
- Explicit and Implicit Knowledge Distillation via Unlabeled Data. Yuzheng Wang, Zuhao Ge, Zhaoyu Chen, Xiangjian Liu, Chuang Ma, Yunquan Sun, Lizhe Qi. 17 Feb 2023.
- Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation. Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh. 21 Sep 2022.
- Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy. Jingru Li, Sheng Zhou, Liangcheng Li, Haishuai Wang, Zhi Yu, Jiajun Bu. 29 Aug 2022.
- Dense Depth Distillation with Out-of-Distribution Simulated Images. Junjie Hu, Chenyou Fan, Mete Ozay, Hualie Jiang, Tin Lun Lam. 26 Aug 2022.
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features. Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shutao Xia, Xiaochun Cao, Kui Ren. 04 Aug 2022. [AAML]
- CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing. Zhiwei Hao, Yong Luo, Zhi Wang, Han Hu, J. An. 24 May 2022.
- Self-distilled Knowledge Delegator for Exemplar-free Class Incremental Learning. Fanfan Ye, Liang Ma, Qiaoyong Zhong, Di Xie, Shiliang Pu. 23 May 2022. [BDL, CLL]
- Data-Free Adversarial Knowledge Distillation for Graph Neural Networks. Yu-Lin Zhuang, Lingjuan Lyu, Chuan Shi, Carl Yang, Lichao Sun. 08 May 2022.
- Synthesizing Informative Training Samples with GAN. Bo Zhao, Hakan Bilen. 15 Apr 2022. [DD]
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning. Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, Ling-Yu Duan. 17 Mar 2022. [FedML]
- MEGA: Model Stealing via Collaborative Generator-Substitute Networks. Chi Hong, Jiyue Huang, L. Chen. 31 Jan 2022.
- Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay. Kuluhan Binici, Shivam Aggarwal, N. Pham, K. Leman, T. Mitra. 09 Jan 2022. [TTA]
- Conditional Generative Data-free Knowledge Distillation. Xinyi Yu, Ling Yan, Yang Yang, Libo Zhou, Linlin Ou. 31 Dec 2021.
- Data-Free Knowledge Transfer: A Survey. Yuang Liu, Wei Zhang, Jun Wang, Jianyong Wang. 31 Dec 2021.
- Up to 100× Faster Data-free Knowledge Distillation. Gongfan Fang, Kanya Mo, Xinchao Wang, Mingli Song, Shitao Bei, Haofei Zhang, Xiuming Zhang. 12 Dec 2021. [DD]
- Defending against Model Stealing via Verifying Embedded External Features. Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shutao Xia, Xiaochun Cao. 07 Dec 2021. [AAML]
- Source-free unsupervised domain adaptation for cross-modality abdominal multi-organ segmentation. Jin Hong, Yudong Zhang, Weitian Chen. 24 Nov 2021. [OOD, MedIm]
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples. Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, Jinho Lee. 04 Nov 2021. [MQ]
- Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data. Gongfan Fang, Yifan Bao, Mingli Song, Xinchao Wang, Don Xie, Chengchao Shen, Xiuming Zhang. 27 Oct 2021.
- FedZKT: Zero-Shot Knowledge Transfer towards Resource-Constrained Federated Learning with Heterogeneous On-Device Models. Lan Zhang, Dapeng Wu, Xiaoyong Yuan. 08 Sep 2021. [FedML]
- Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data. Kuluhan Binici, N. Pham, T. Mitra, K. Leman. 11 Aug 2021.
- MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI. T. Miura, Satoshi Hasegawa, Toshiki Shibahara. 19 Jul 2021. [SILM, MIACV]
- AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression. Baozhou Zhu, P. Hofstee, J. Peltenburg, Jinho Lee, Zaid Al-Ars. 25 May 2021.
- Contrastive Model Inversion for Data-Free Knowledge Distillation. Gongfan Fang, Mingli Song, Xinchao Wang, Chen Shen, Xingen Wang, Xiuming Zhang. 18 May 2021.
- Dataset Inference: Ownership Resolution in Machine Learning. Pratyush Maini, Mohammad Yaghini, Nicolas Papernot. 21 Apr 2021. [FedML]
- Source-Free Domain Adaptation for Semantic Segmentation. Yuang Liu, Wei Zhang, Jun Wang. 30 Mar 2021.
- Zero-shot Adversarial Quantization. Yuang Liu, Wei Zhang, Jun Wang. 29 Mar 2021. [MQ]
- Distilling Object Detectors via Decoupled Features. Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu. 26 Mar 2021.
- Training Generative Adversarial Networks in One Stage. Chengchao Shen, Youtan Yin, Xinchao Wang, Xubin Li, Mingli Song, Xiuming Zhang. 28 Feb 2021. [GAN]
- Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation. Kenneth Borup, L. Andersen. 25 Feb 2021.
- Enhancing Data-Free Adversarial Distillation with Activation Regularization and Virtual Interpolation. Xiaoyang Qu, Jianzong Wang, Jing Xiao. 23 Feb 2021.
- Large-Scale Generative Data-Free Distillation. Liangchen Luo, Mark Sandler, Zi Lin, A. Zhmoginov, Andrew G. Howard. 10 Dec 2020.
- Progressive Network Grafting for Few-Shot Knowledge Distillation. Chengchao Shen, Xinchao Wang, Youtan Yin, Mingli Song, Sihui Luo, Xiuming Zhang. 09 Dec 2020.
- Data-Free Model Extraction. Jean-Baptiste Truong, Pratyush Maini, R. Walls, Nicolas Papernot. 30 Nov 2020. [MIACV]
- MixMix: All You Need for Data-Free Compression Are Feature and Data Mixing. Yuhang Li, Feng Zhu, Ruihao Gong, Mingzhu Shen, Xin Dong, F. Yu, Shaoqing Lu, Shi Gu. 19 Nov 2020. [MQ]
- Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer. Chen Zhu, Zheng Xu, Ali Shafahi, Manli Shu, Amin Ghiasi, Tom Goldstein. 14 Oct 2020. [MQ]
- MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation. Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi. 06 May 2020. [AAML]
- Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation. Mitchell A. Gordon, Kevin Duh. 05 Mar 2020. [CLL, VLM]
- Adversarial Discriminative Domain Adaptation. Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell. 17 Feb 2017. [GAN, OOD]