Zero-Shot Knowledge Distillation in Deep Networks
arXiv: 1905.08114 · 20 May 2019
Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty

Papers citing "Zero-Shot Knowledge Distillation in Deep Networks"

50 / 151 papers shown
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
VLM · 12 May 2025

Corrected with the Latest Version: Make Robust Asynchronous Federated Learning Possible
Chaoyi Lu, Yiding Sun, Pengbo Li, Zhichuan Yang
FedML · 05 Apr 2025

Toward Efficient Data-Free Unlearning
Chenhao Zhang, Shaofei Shen, Weitong Chen, Miao Xu
MU · 18 Dec 2024

Relation-Guided Adversarial Learning for Data-free Knowledge Transfer
Yingping Liang, Ying Fu
16 Dec 2024

Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation
Minh-Tuan Tran, Trung Le, Xuan-May Le, Jianfei Cai, Mehrtash Harandi, Dinh Q. Phung
26 Nov 2024

Towards Effective Data-Free Knowledge Distillation via Diverse Diffusion Augmentation
Muquan Li, Dongyang Zhang, Tao He, Xiurui Xie, Yuan-Fang Li, Ke Qin
23 Oct 2024
DFDG: Data-Free Dual-Generator Adversarial Distillation for One-Shot Federated Learning
Kangyang Luo, Shuai Wang, Y. Fu, Renrong Shao, Xiang Li, Yunshi Lan, Ming Gao, Jinlong Shu
FedML · 12 Sep 2024

Data-free Distillation with Degradation-prompt Diffusion for Multi-weather Image Restoration
Pei Wang, Xiaotong Luo, Yuan Xie, Yanyun Qu
DiffM · 05 Sep 2024

DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie
DiffM · 05 Sep 2024

Infrared Domain Adaptation with Zero-Shot Quantization
Burak Sevsay, Erdem Akagündüz
VLM, MQ · 25 Aug 2024

Condensed Sample-Guided Model Inversion for Knowledge Distillation
Kuluhan Binici, Shivam Aggarwal, Cihan Acar, N. Pham, K. Leman, Gim Hee Lee, Tulika Mitra
25 Aug 2024

Encapsulating Knowledge in One Prompt
Qi Li, Runpeng Yu, Xinchao Wang
VLM, KELM · 16 Jul 2024

Small Scale Data-Free Knowledge Distillation
He Liu, Yikai Wang, Huaping Liu, Fuchun Sun, Anbang Yao
12 Jun 2024
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks
Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, ..., Zhenghua Chen, M. Aly, Jie Lin, Min-man Wu, Xiaoli Li
09 May 2024

Zero-Shot Distillation for Image Encoders: How to Make Effective Use of Synthetic Data
Niclas Popp, J. H. Metzen, Matthias Hein
VLM · 25 Apr 2024

FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning
Yinlin Zhu, Xunkai Li, Zhengyu Wu, Di Wu, Miao Hu, Ronghua Li
FedML · 22 Apr 2024

Data-free Knowledge Distillation for Fine-grained Visual Categorization
Renrong Shao, Wei Zhang, Jianhua Yin, Jun Wang
18 Apr 2024

Efficient Data-Free Model Stealing with Label Diversity
Yiyong Liu, Rui Wen, Michael Backes, Yang Zhang
AAML · 29 Mar 2024

Training Self-localization Models for Unseen Unfamiliar Places via Teacher-to-Student Data-Free Knowledge Transfer
Kenta Tsukahara, Kanji Tanaka, Daiki Iwata
13 Mar 2024
Distilling the Knowledge in Data Pruning
Emanuel Ben-Baruch, Adam Botach, Igor Kviatkovsky, Manoj Aggarwal, Gérard Medioni
12 Mar 2024

Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation
Hyunjune Shin, Dong-Wan Choi
AAML · 18 Feb 2024

Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning
Chenyang Wang, Junjun Jiang, Xingyu Hu, Xianming Liu, Xiangyang Ji
12 Jan 2024

Lightweight Adaptive Feature De-drifting for Compressed Image Classification
Long Peng, Yang Cao, Yuejin Sun, Yang Wang
03 Jan 2024

Recursive Distillation for Open-Set Distributed Robot Localization
Kenta Tsukahara, Kanji Tanaka
26 Dec 2023

Federated Learning via Input-Output Collaborative Distillation
Xuan Gong, Shanglin Li, Yuxiang Bao, Barry Yao, Yawen Huang, Ziyan Wu, Baochang Zhang, Yefeng Zheng, David Doermann
FedML · 22 Dec 2023

Fake It Till Make It: Federated Learning with Consensus-Oriented Generation
Rui Ye, Yaxin Du, Zhenyang Ni, Siheng Chen, Yanfeng Wang
FedML · 10 Dec 2023
Data-Free Hard-Label Robustness Stealing Attack
Xiaojian Yuan, Kejiang Chen, Wen Huang, Jie Zhang, Weiming Zhang, Neng H. Yu
AAML · 10 Dec 2023

SlimSAM: 0.1% Data Makes Segment Anything Slim
Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang
08 Dec 2023

A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective
Xianghua Xie, Chen Hu, Hanchi Ren, Jingjing Deng
FedML, AAML · 27 Nov 2023

Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images
Logan Frank, Jim Davis
20 Oct 2023

Towards the Fundamental Limits of Knowledge Transfer over Finite Domains
Qingyue Zhao, Banghua Zhu
11 Oct 2023

Robustness-Guided Image Synthesis for Data-Free Quantization
Jianhong Bai, Yuchen Yang, Huanpeng Chu, Hualiang Wang, Zuo-Qiang Liu, Ruizhe Chen, Xiaoxuan He, Lianrui Mu, Chengfei Cai, Haoji Hu
DiffM, MQ · 05 Oct 2023

NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation
Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Harandi, Quan Hung Tran, Dinh Q. Phung
30 Sep 2023
Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation
Vlad Hondru, Radu Tudor Ionescu
DiffM · 29 Sep 2023

Causal-DFQ: Causality Guided Data-free Network Quantization
Yuzhang Shang, Bingxin Xu, Gaowen Liu, Ramana Rao Kompella, Yan Yan
MQ, CML · 24 Sep 2023

DFRD: Data-Free Robustness Distillation for Heterogeneous Federated Learning
Kangyang Luo, Shuai Wang, Y. Fu, Xiang Li, Yunshi Lan, Minghui Gao
FedML · 24 Sep 2023

Self-Training and Multi-Task Learning for Limited Data: Evaluation Study on Object Detection
Hoàng-Ân Lê, Minh-Tan Pham
12 Sep 2023

REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments
Humaid Ahmed Desai, Amr B. Hilal, Hoda Eldardiry
25 Aug 2023

Semi-Supervised Learning via Weight-aware Distillation under Class Distribution Mismatch
Pan Du, Suyun Zhao, Zisen Sheng, Cuiping Li, Hong Chen
23 Aug 2023

Internal Cross-layer Gradients for Extending Homogeneity to Heterogeneity in Federated Learning
Yun-Hin Chan, Rui Zhou, Running Zhao, Zhihan Jiang, Edith C. H. Ngai
FedML · 22 Aug 2023
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu
VLM, OffRL · 19 Jun 2023

Deep Classifier Mimicry without Data Access
Steven Braun, Martin Mundt, Kristian Kersting
DiffM · 03 Jun 2023

Federated Domain Generalization: A Survey
Ying Li, Xingwei Wang, Rongfei Zeng, Praveen Kumar Donta, Ilir Murturi, Min Huang, Schahram Dustdar
OOD, FedML, AI4CE · 02 Jun 2023

PaD: Program-aided Distillation Can Teach Small Models Reasoning Better than Chain-of-thought Fine-tuning
Xuekai Zhu, Biqing Qi, Kaiyan Zhang, Xingwei Long, Zhouhan Lin, Bowen Zhou
ALM, LRM · 23 May 2023

Self-discipline on multiple channels
Jiutian Zhao, Liangchen Luo, Hao Wang
27 Apr 2023

FedIN: Federated Intermediate Layers Learning for Model Heterogeneity
Yun-Hin Chan, Zhihan Jiang, Jing Deng, Edith C. H. Ngai
FedML · 03 Apr 2023
A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts
Jian Liang, R. He, Tien-Ping Tan
OOD, VLM, TTA · 27 Mar 2023

Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation
Gaurav Patel, Konda Reddy Mopuri, Qiang Qiu
28 Feb 2023

A Comprehensive Survey on Source-free Domain Adaptation
Zhiqi Yu, Jingjing Li, Zhekai Du, Lei Zhu, H. Shen
TTA · 23 Feb 2023

Improved knowledge distillation by utilizing backward pass knowledge in neural networks
A. Jafari, Mehdi Rezagholizadeh, A. Ghodsi
27 Jan 2023