ResearchTrend.AI
Zero-Shot Knowledge Distillation in Deep Networks
arXiv:1905.08114 · 20 May 2019
Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty

Papers citing "Zero-Shot Knowledge Distillation in Deep Networks"
(50 / 151 papers shown)

Dataset Distillation: A Comprehensive Review
Ruonan Yu, Songhua Liu, Xinchao Wang · DD · 17 Jan 2023

Large Language Models Are Reasoning Teachers
Namgyu Ho, Laura Schmid, Se-Young Yun · ReLM, ELM, LRM · 20 Dec 2022

Dataless Knowledge Fusion by Merging Weights of Language Models
Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, Pengxiang Cheng · FedML, MoMe · 19 Dec 2022

Scalable Collaborative Learning via Representation Sharing
Frédéric Berdoz, Abhishek Singh, Martin Jaggi, Ramesh Raskar · FedML · 20 Nov 2022

Exploiting Features and Logits in Heterogeneous Federated Learning
Yun-Hin Chan, Edith C. H. Ngai · FedML · 27 Oct 2022

Investigating Neuron Disturbing in Fusing Heterogeneous Neural Networks
Biao Zhang, Shuqin Zhang · FedML, MoMe · 24 Oct 2022

A Survey on Heterogeneous Federated Learning
Dashan Gao, Xin Yao, Qian Yang · FedML · 10 Oct 2022

Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh · 21 Sep 2022

Federated Zero-Shot Learning for Visual Recognition
Zhi Chen, Yadan Luo, Sen Wang, Jingjing Li, Zi Huang · FedML · 05 Sep 2022

Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy
Jingru Li, Sheng Zhou, Liangcheng Li, Haishuai Wang, Zhi Yu, Jiajun Bu · 29 Aug 2022

Teacher Guided Training: An Efficient Framework for Knowledge Transfer
Manzil Zaheer, A. S. Rawat, Seungyeon Kim, Chong You, Himanshu Jain, Andreas Veit, Rob Fergus, Surinder Kumar · VLM · 14 Aug 2022

Black-box Few-shot Knowledge Distillation
Dang Nguyen, Sunil R. Gupta, Kien Do, Svetha Venkatesh · 25 Jul 2022

Explaining Neural Networks without Access to Training Data
Sascha Marton, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, Heiner Stuckenschmidt · FAtt · 10 Jun 2022

CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing
Zhiwei Hao, Yong Luo, Zhi Wang, Han Hu, J. An · 24 May 2022

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu · 16 May 2022

Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye, Su Lu, De-Chuan Zhan · FedML · 04 May 2022

Towards Data-Free Model Stealing in a Hard Label Setting
Sunandini Sanyal, Sravanti Addepalli, R. Venkatesh Babu · AAML · 23 Apr 2022

Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning
Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, Ling-Yu Duan · FedML · 17 Mar 2022

2-speed network ensemble for efficient classification of incremental land-use/land-cover satellite image chips
M. J. Horry, Subrata Chakraborty, B. Pradhan, N. Shukla, Sanjoy Paul · 15 Mar 2022

Distillation from heterogeneous unlabeled collections
Jean-Michel Begon, Pierre Geurts · 17 Jan 2022

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay
Kuluhan Binici, Shivam Aggarwal, N. Pham, K. Leman, T. Mitra · TTA · 09 Jan 2022

Conditional Generative Data-free Knowledge Distillation
Xinyi Yu, Ling Yan, Yang Yang, Libo Zhou, Linlin Ou · 31 Dec 2021

Data-Free Knowledge Transfer: A Survey
Yuang Liu, Wei Zhang, Jun Wang, Jianyong Wang · 31 Dec 2021

Ex-Model: Continual Learning from a Stream of Trained Models
Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, D. Bacciu · CLL · 13 Dec 2021

FedRAD: Federated Robust Adaptive Distillation
Stefán Páll Sturluson, Samuel Trew, Luis Muñoz-González, Matei Grama, Jonathan Passerat-Palmbach, Daniel Rueckert, A. Alansary · FedML · 02 Dec 2021

DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan · AAML, MIACV · 08 Nov 2021

Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples
Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, Jinho Lee · MQ · 04 Nov 2021

Beyond Classification: Knowledge Distillation using Multi-Object Impressions
Gaurav Kumar Nayak, Monish Keswani, Sharan Seshadri, Anirban Chakraborty · 27 Oct 2021

Applications and Techniques for Fast Machine Learning in Science
A. Deiana, Nhan Tran, Joshua C. Agar, Michaela Blott, G. D. Guglielmo, ..., Ashish Sharma, S. Summers, Pietro Vischia, J. Vlimant, Olivia Weng · 25 Oct 2021

Towards Data-Free Domain Generalization
A. Frikha, Haokun Chen, Denis Krompass, Thomas Runkler, Volker Tresp · OOD · 09 Oct 2021

Light-weight Deformable Registration using Adversarial Learning with Distilling Knowledge
M. Tran, Tuong Khanh Long Do, Huy Tran, Erman Tjiputra, Quang-Dieu Tran, Anh Nguyen · MedIm · 04 Oct 2021

MINIMAL: Mining Models for Data Free Universal Adversarial Triggers
Swapnil Parekh, Yaman Kumar Singla, Somesh Singh, Changyou Chen, Balaji Krishnamurthy, R. Shah · AAML · 25 Sep 2021

Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data
Kuluhan Binici, N. Pham, T. Mitra, K. Leman · 11 Aug 2021

Representation Consolidation for Training Expert Students
Zhizhong Li, Avinash Ravichandran, Charless C. Fowlkes, M. Polito, Rahul Bhotika, Stefano Soatto · 16 Jul 2021

LANA: Latency Aware Network Acceleration
Pavlo Molchanov, Jimmy Hall, Hongxu Yin, Jan Kautz, Nicolò Fusi, Arash Vahdat · 12 Jul 2021

Confidence Conditioned Knowledge Distillation
Sourav Mishra, Suresh Sundaram · 06 Jul 2021

Adversarial Examples Make Strong Poisons
Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein · SILM · 21 Jun 2021

Zero-Shot Federated Learning with New Classes for Audio Classification
Gautham Krishna Gudur, S. K. Perepu · FedML · 18 Jun 2021

Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model
Z. Wang · 07 Jun 2021

Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax
Ehsan Kamalloo, Mehdi Rezagholizadeh, Peyman Passban, Ali Ghodsi · AAML · 28 May 2021

AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression
Baozhou Zhu, P. Hofstee, J. Peltenburg, Jinho Lee, Zaid Al-Ars · 25 May 2021

Revisiting Knowledge Distillation for Object Detection
Amin Banitalebi-Dehkordi · 22 May 2021

Graph-Free Knowledge Distillation for Graph Neural Networks
Xiang Deng, Zhongfei Zhang · 16 May 2021

Test-Time Adaptation Toward Personalized Speech Enhancement: Zero-Shot Learning with Knowledge Distillation
Sunwoo Kim, Minje Kim · 08 May 2021

Visualizing Adapted Knowledge in Domain Transfer
Yunzhong Hou, Liang Zheng · 20 Apr 2021

Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack
Xinyi Zhang, Chengfang Fang, Jie Shi · MIACV, MLAU, SILM · 13 Apr 2021

Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis
Z. Wang · 10 Apr 2021

PrivateSNN: Privacy-Preserving Spiking Neural Networks
Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda · 07 Apr 2021

Source-Free Domain Adaptation for Semantic Segmentation
Yuang Liu, Wei Zhang, Jun Wang · 30 Mar 2021

Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models
Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, Bin He · SILM · 29 Mar 2021