ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion (arXiv:1912.08795)

18 December 2019
Hongxu Yin, Pavlo Molchanov, Zhizhong Li, J. Álvarez, Arun Mallya, Derek Hoiem, N. Jha, Jan Kautz
ArXiv · PDF · HTML

Papers citing "Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion"

50 / 82 papers shown
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
VLM · 12 May 2025

Corrected with the Latest Version: Make Robust Asynchronous Federated Learning Possible
Chaoyi Lu, Yiding Sun, Pengbo Li, Zhichuan Yang
FedML · 05 Apr 2025

Forget the Data and Fine-Tuning! Just Fold the Network to Compress
Dong Wang, Haris Šikić, Lothar Thiele, O. Saukh
17 Feb 2025

CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling
Kaiyuan Zhang, Siyuan Cheng, Guangyu Shen, Bruno Ribeiro, Shengwei An, Pin-Yu Chen, X. Zhang, Ninghui Li
28 Jan 2025

Semantics Prompting Data-Free Quantization for Low-Bit Vision Transformers
Yunshan Zhong, Yuyao Zhou, Yuxin Zhang, Shen Li, Yong Li, Fei Chao, Zhanpeng Zeng, Rongrong Ji
MQ · 31 Dec 2024

Multi-Level Optimal Transport for Universal Cross-Tokenizer Knowledge Distillation on Language Models
Xiao Cui, Mo Zhu, Yulei Qin, Liang Xie, Wengang Zhou, H. Li
19 Dec 2024

One VLM to Keep it Learning: Generation and Balancing for Data-free Continual Visual Question Answering
Deepayan Das, Davide Talon, Massimiliano Mancini, Yiming Wang, Elisa Ricci
04 Nov 2024

Data Generation for Hardware-Friendly Post-Training Quantization
Lior Dikstein, Ariel Lapid, Arnon Netzer, H. Habi
MQ · 29 Oct 2024

A Unified Solution to Diverse Heterogeneities in One-shot Federated Learning
Jun Bai, Yiliao Song, Di Wu, Atul Sajjanhar, Yong Xiang, Wei Zhou, Xiaohui Tao, Yan Li, Y. Li
FedML · 28 Oct 2024

Few-Shot Class-Incremental Learning with Non-IID Decentralized Data
Cuiwei Liu, Siang Xu, Huaijun Qiu, Jing Zhang, Zhi Liu, Liang Zhao
CLL · 18 Sep 2024

DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie
DiffM · 05 Sep 2024

MimiQ: Low-Bit Data-Free Quantization of Vision Transformers with Encouraging Inter-Head Attention Similarity
Kanghyun Choi, Hyeyoon Lee, Dain Kwon, Sunjong Park, Kyuyeun Kim, Noseong Park, Jinho Lee
MQ · 29 Jul 2024

Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification
Yunyi Xuan, Weijie Chen, Shicai Yang, Di Xie, Luojun Lin, Yueting Zhuang
VLM · 21 Jul 2024

A Label is Worth a Thousand Images in Dataset Distillation
Tian Qin, Zhiwei Deng, David Alvarez-Melis
DD · 15 Jun 2024

Data Reconstruction: When You See It and When You Don't
Edith Cohen, Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer, Eliad Tsfadia
AAML · 24 May 2024

FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning
Yinlin Zhu, Xunkai Li, Zhengyu Wu, Di Wu, Miao Hu, Ronghua Li
FedML · 22 Apr 2024

Data-free Knowledge Distillation for Fine-grained Visual Categorization
Renrong Shao, Wei Zhang, Jianhua Yin, Jun Wang
18 Apr 2024

Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning
Chenyang Wang, Junjun Jiang, Xingyu Hu, Xianming Liu, Xiangyang Ji
12 Jan 2024

Continual Learning through Networks Splitting and Merging with Dreaming-Meta-Weighted Model Fusion
Yi Sun, Xin Xu, Jian Li, Guanglei Xie, Yifei Shi, Qiang Fang
CLL, MoMe · 12 Dec 2023

Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation
Vlad Hondru, Radu Tudor Ionescu
DiffM · 29 Sep 2023

Feature Matching Data Synthesis for Non-IID Federated Learning
Zijian Li, Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
09 Aug 2023

Neural Collapse Terminus: A Unified Solution for Class Incremental Learning and Its Variants
Yibo Yang, Haobo Yuan, Xiangtai Li, Jianlong Wu, Lefei Zhang, Zhouchen Lin, Philip H. S. Torr, Dacheng Tao, Bernard Ghanem
CLL · 03 Aug 2023

Sampling to Distill: Knowledge Transfer from Open-World Data
Yuzheng Wang, Zhaoyu Chen, Jie M. Zhang, Dingkang Yang, Zuhao Ge, Yang Liu, Siao Liu, Yunquan Sun, Wenqiang Zhang, Lizhe Qi
31 Jul 2023

f-Divergence Minimization for Sequence-Level Knowledge Distillation
Yuqiao Wen, Zichao Li, Wenyu Du, Lili Mou
27 Jul 2023

Deconstructing Data Reconstruction: Multiclass, Weight Decay and General Losses
G. Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Yakir Oz, Yaniv Nikankin, Michal Irani
04 Jul 2023

DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation
Zhicong Yan, Shenghong Li, Ruijie Zhao, Yuan Tian, Yuanyuan Zhao
AAML · 13 Jun 2023

LLM-QAT: Data-Free Quantization Aware Training for Large Language Models
Zechun Liu, Barlas Oğuz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, Vikas Chandra
MQ · 29 May 2023

Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li, Yuxuan Li, Penghai Zhao, Renjie Song, Xiang Li, Jian Yang
22 May 2023

Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning
Zixuan Hu, Li Shen, Zhenyi Wang, Tongliang Liu, Chun Yuan, Dacheng Tao
20 Mar 2023

A Comprehensive Survey on Source-free Domain Adaptation
Zhiqi Yu, Jingjing Li, Zhekai Du, Lei Zhu, H. Shen
TTA · 23 Feb 2023

CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models
Hossein Hajipour, Keno Hassler, Thorsten Holz, Lea Schonherr, Mario Fritz
ELM · 08 Feb 2023

Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class Incremental Learning
Yibo Yang, Haobo Yuan, Xiangtai Li, Zhouchen Lin, Philip H. S. Torr, Dacheng Tao
CLL · 06 Feb 2023

Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning
Xialei Liu, Jiang-Tian Zhai, Andrew D. Bagdanov, Ke Li, Mingg-Ming Cheng
CLL · 16 Dec 2022

Video Test-Time Adaptation for Action Recognition
Wei Lin, M. Jehanzeb Mirza, Mateusz Koziñski, Horst Possegger, Hilde Kuehne, Horst Bischof
TTA · 24 Nov 2022

A Survey on Computer Vision based Human Analysis in the COVID-19 Era
Fevziye Irem Eyiokur, Alperen Kantarci, M. Erakin, Naser Damer, Ferda Ofli, ..., Janez Križaj, A. Salah, Alexander Waibel, Vitomir Štruc, H. K. Ekenel
07 Nov 2022

Decompiling x86 Deep Neural Network Executables
Zhibo Liu, Yuanyuan Yuan, Shuai Wang, Xiaofei Xie, L. Ma
AAML · 03 Oct 2022

Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh
21 Sep 2022

PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
Zhikai Li, Mengjuan Chen, Junrui Xiao, Qingyi Gu
ViT, MQ · 13 Sep 2022

Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning using Independent Component Analysis
Sanjay Kariyappa, Chuan Guo, Kiwan Maeng, Wenjie Xiong, G. E. Suh, Moinuddin K. Qureshi, Hsien-Hsin S. Lee
FedML · 12 Sep 2022

FS-BAN: Born-Again Networks for Domain Generalization Few-Shot Classification
Yunqing Zhao, Ngai-man Cheung
BDL · 23 Aug 2022

RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation
Qucheng Peng, Zhengming Ding, Lingjuan Lyu, Lichao Sun, Chen Chen
OOD, MLAU · 22 Aug 2022

Mixed-Precision Neural Networks: A Survey
M. Rakka, M. Fouda, Pramod P. Khargonekar, Fadi J. Kurdahi
MQ · 11 Aug 2022

Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay
Huan Liu, Li Gu, Zhixiang Chi, Yang Wang, Yuanhao Yu, Jun Chen, Jingshan Tang
22 Jul 2022

Generative Domain Adaptation for Face Anti-Spoofing
Qianyu Zhou, Ke-Yue Zhang, Taiping Yao, Ran Yi, Kekai Sheng, Shouhong Ding, Lizhuang Ma
CVBM · 20 Jul 2022

Reconstructing Training Data from Trained Neural Networks
Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani
15 Jun 2022

Few-Shot Unlearning by Model Inversion
Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, Jungseul Ok
MU · 31 May 2022

IDEAL: Query-Efficient Data-Free Learning from Black-box Models
Jie M. Zhang, Chen Chen, Lingjuan Lyu
23 May 2022

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu
16 May 2022

DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao
ViT · 27 Apr 2022

A Closer Look at Rehearsal-Free Continual Learning
James Smith, Junjiao Tian, Shaunak Halbe, Yen-Chang Hsu, Z. Kira
VLM, CLL · 31 Mar 2022