Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion
arXiv:1912.08795 · 18 December 2019
Hongxu Yin, Pavlo Molchanov, Zhizhong Li, J. Álvarez, Arun Mallya, Derek Hoiem, N. Jha, Jan Kautz
Papers citing "Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion" (35 of 85 papers shown)
Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu · 16 May 2022 · 19 / 11 / 0

DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao · ViT · 27 Apr 2022 · 32 / 76 / 0

A Closer Look at Rehearsal-Free Continual Learning
James Smith, Junjiao Tian, Shaunak Halbe, Yen-Chang Hsu, Z. Kira · VLM, CLL · 31 Mar 2022 · 18 / 58 / 0

R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning
Qiankun Gao, Chen Zhao, Bernard Ghanem, Jian Zhang · CLL · 24 Mar 2022 · 20 / 61 / 0

The Dark Side: Security Concerns in Machine Learning for EDA
Zhiyao Xie, Jingyu Pan, Chen-Chia Chang, Yiran Chen · 20 Mar 2022 · 6 / 4 / 0

Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning
Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, Ling-Yu Duan · FedML · 17 Mar 2022 · 28 / 252 / 0
Structured Pruning is All You Need for Pruning CNNs at Initialization
Yaohui Cai, Weizhe Hua, Hongzheng Chen, G. E. Suh, Christopher De Sa, Zhiru Zhang · CVBM · 04 Mar 2022 · 39 / 14 / 0

Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations
Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum, Tom Goldstein · 31 Jan 2022 · 34 / 15 / 0

Variational Model Inversion Attacks
Kuan-Chieh Jackson Wang, Yanzhe Fu, Ke Li, Ashish Khisti, R. Zemel, Alireza Makhzani · MIACV · 26 Jan 2022 · 11 / 95 / 0

Ex-Model: Continual Learning from a Stream of Trained Models
Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, D. Bacciu · CLL · 13 Dec 2021 · 14 / 11 / 0

The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image
Yuki M. Asano, Aaqib Saeed · 01 Dec 2021 · 30 / 7 / 0

Source-free unsupervised domain adaptation for cross-modality abdominal multi-organ segmentation
Jin Hong, Yudong Zhang, Weitian Chen · OOD, MedIm · 24 Nov 2021 · 27 / 82 / 0
IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization
Yunshan Zhong, Mingbao Lin, Gongrui Nan, Jianzhuang Liu, Baochang Zhang, Yonghong Tian, Rongrong Ji · MQ · 17 Nov 2021 · 40 / 71 / 0

Semantic Host-free Trojan Attack
Haripriya Harikumar, Kien Do, Santu Rana, Sunil R. Gupta, Svetha Venkatesh · 26 Oct 2021 · 12 / 1 / 0

When to Prune? A Policy towards Early Structural Pruning
Maying Shen, Pavlo Molchanov, Hongxu Yin, J. Álvarez · VLM · 22 Oct 2021 · 20 / 52 / 0

Global Vision Transformer Pruning with Hessian-Aware Saliency
Huanrui Yang, Hongxu Yin, Maying Shen, Pavlo Molchanov, Hai Helen Li, Jan Kautz · ViT · 10 Oct 2021 · 30 / 38 / 0

Fine-grained Data Distribution Alignment for Post-Training Quantization
Yunshan Zhong, Mingbao Lin, Mengzhao Chen, Ke Li, Yunhang Shen, Fei Chao, Yongjian Wu, Rongrong Ji · MQ · 09 Sep 2021 · 84 / 19 / 0
Memory-Free Generative Replay For Class-Incremental Learning
Xiaomeng Xin, Yiran Zhong, Yunzhong Hou, Jinjun Wang, Liang Zheng · 01 Sep 2021 · 22 / 8 / 0

Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data
Kuluhan Binici, N. Pham, T. Mitra, K. Leman · 11 Aug 2021 · 20 / 40 / 0

DeepScale: Online Frame Size Adaptation for Multi-object Tracking on Smart Cameras and Edge Servers
Keivan Nalaie, Renjie Xu, Rong Zheng · VOT · 22 Jul 2021 · 21 / 9 / 0

On The Distribution of Penultimate Activations of Classification Networks
Minkyo Seo, Yoonho Lee, Suha Kwak · UQCV · 05 Jul 2021 · 16 / 4 / 0

Incremental Deep Neural Network Learning using Classification Confidence Thresholding
Justin Leo, Jugal Kalita · CLL · 21 Jun 2021 · 14 / 15 / 0

Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning
James Smith, Yen-Chang Hsu, John C. Balloch, Yilin Shen, Hongxia Jin, Z. Kira · CLL · 17 Jun 2021 · 46 / 161 / 0
Graph-Free Knowledge Distillation for Graph Neural Networks
Xiang Deng, Zhongfei Zhang · 16 May 2021 · 28 / 65 / 0

Delving into Data: Effectively Substitute Training for Black-box Attack
Wenxuan Wang, Bangjie Yin, Taiping Yao, Li Zhang, Yanwei Fu, Shouhong Ding, Jilin Li, Feiyue Huang, Xiangyang Xue · AAML · 26 Apr 2021 · 60 / 63 / 0

Visualizing Adapted Knowledge in Domain Transfer
Yunzhong Hou, Liang Zheng · 20 Apr 2021 · 111 / 54 / 0

See through Gradients: Image Batch Recovery via GradInversion
Hongxu Yin, Arun Mallya, Arash Vahdat, J. Álvarez, Jan Kautz, Pavlo Molchanov · FedML · 15 Apr 2021 · 25 / 459 / 0

Diversifying Sample Generation for Accurate Data-Free Quantization
Xiangguo Zhang, Haotong Qin, Yifu Ding, Ruihao Gong, Qing Yan, Renshuai Tao, Yuhang Li, F. Yu, Xianglong Liu · MQ · 01 Mar 2021 · 54 / 94 / 0
Enhancing Data-Free Adversarial Distillation with Activation Regularization and Virtual Interpolation
Xiaoyang Qu, Jianzong Wang, Jing Xiao · 23 Feb 2021 · 16 / 14 / 0

Robustness and Diversity Seeking Data-Free Knowledge Distillation
Pengchao Han, Jihong Park, Shiqiang Wang, Yejun Liu · 07 Nov 2020 · 15 / 12 / 0

Black-Box Ripper: Copying black-box models using generative evolutionary algorithms
Antonio Bărbălău, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu · MIACV, MLAU · 21 Oct 2020 · 19 / 43 / 0

TUTOR: Training Neural Networks Using Decision Rules as Model Priors
Shayan Hassantabar, Prerit Terway, N. Jha · 12 Oct 2020 · 28 / 10 / 0

Automatic Recall Machines: Internal Replay, Continual Learning and the Brain
Xu Ji, Joao Henriques, Tinne Tuytelaars, Andrea Vedaldi · KELM · 22 Jun 2020 · 17 / 10 / 0

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · VLM · 09 Jun 2020 · 19 / 2,835 / 0

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi · AAML · 06 May 2020 · 21 / 146 / 0