Data-Free Network Quantization With Adversarial Knowledge Distillation

8 May 2020
Yoojin Choi
Jihwan P. Choi
Mostafa El-Khamy
Jungwon Lee
    MQ
ArXiv (abs) · PDF · HTML

Papers citing "Data-Free Network Quantization With Adversarial Knowledge Distillation"

50 / 69 papers shown
Sparse Model Inversion: Efficient Inversion of Vision Transformers for Data-Free Applications
International Conference on Machine Learning (ICML), 2025
Zixuan Hu
Yongxian Wei
Li Shen
Zhenyi Wang
Lei Li
Chun Yuan
Dacheng Tao
31 Oct 2025
Conditional Pseudo-Supervised Contrast for Data-Free Knowledge Distillation
Pattern Recognition (Pattern Recogn.), 2023
Renrong Shao
Wei Zhang
Ning Yang
03 Oct 2025
CAE-DFKD: Bridging the Transferability Gap in Data-Free Knowledge Distillation
Design Automation Conference (DAC), 2025
Zherui Zhang
Changwei Wang
Rongtao Xu
Wenyuan Xu
Shibiao Xu
Yu Zhang
Li Guo
30 Apr 2025
Knowledge Distillation: Enhancing Neural Network Compression with Integrated Gradients
David E. Hernandez
J. Chang
Torbjörn E. M. Nordling
17 Mar 2025
Defense Against Model Stealing Based on Account-Aware Distribution Discrepancy
AAAI Conference on Artificial Intelligence (AAAI), 2025
Jian-Ping Mei
Weibin Zhang
Jie Chen
Xinyu Zhang
Tiantian Zhu
AAML
16 Mar 2025
Toward Efficient Data-Free Unlearning
AAAI Conference on Artificial Intelligence (AAAI), 2024
Chenhao Zhang
Shaofei Shen
Weitong Chen
Miao Xu
MU
18 Dec 2024
Relation-Guided Adversarial Learning for Data-free Knowledge Transfer
International Journal of Computer Vision (IJCV), 2024
Yingping Liang
Ying Fu
16 Dec 2024
Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation
Minh-Tuan Tran
Trung Le
Xuan-May Le
Jianfei Cai
Mehrtash Harandi
Dinh Q. Phung
26 Nov 2024
Data Generation for Hardware-Friendly Post-Training Quantization
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2024
Lior Dikstein
Ariel Lapid
Arnon Netzer
H. Habi
MQ
29 Oct 2024
A method of using RSVD in residual calculation of LowBit GEMM
Hongyaoxing Gu
MQ
27 Sep 2024
DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Computer Vision and Pattern Recognition (CVPR), 2024
Qianlong Xiang
Miao Zhang
Yuzhang Shang
Yue Yu
Yan Yan
Liqiang Nie
DiffM
05 Sep 2024
Small Scale Data-Free Knowledge Distillation
He Liu
Yikai Wang
Huaping Liu
Fuchun Sun
Anbang Yao
12 Jun 2024
Data-free Knowledge Distillation for Fine-grained Visual Categorization
Renrong Shao
Wei Zhang
Jianhua Yin
Jun Wang
18 Apr 2024
Efficient Data-Free Model Stealing with Label Diversity
Yiyong Liu
Rui Wen
Michael Backes
Yang Zhang
AAML
29 Mar 2024
De-confounded Data-free Knowledge Distillation for Handling Distribution Shifts
Yuzheng Wang
Dingkang Yang
Zhaoyu Chen
Yang Liu
Siao Liu
Wenqiang Zhang
Lihua Zhang
Lizhe Qi
28 Mar 2024
AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation
International Conference on Learning Representations (ICLR), 2024
Zihao Tang
Zheqi Lv
Shengyu Zhang
Yifan Zhou
Xinyu Duan
Leilei Gan
Kun Kuang
11 Mar 2024
Model Compression Techniques in Biometrics Applications: A Survey
Eduarda Caldeira
Pedro C. Neto
Marco Huber
Naser Damer
Ana F. Sequeira
18 Jan 2024
Direct Distillation between Different Domains
European Conference on Computer Vision (ECCV), 2024
Jialiang Tang
Shuo Chen
Gang Niu
Hongyuan Zhu
Qiufeng Wang
Chen Gong
Masashi Sugiyama
12 Jan 2024
Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images
Logan Frank
Jim Davis
20 Oct 2023
Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models
Miaoxi Zhu
Qihuang Zhong
Li Shen
Liang Ding
Juhua Liu
Bo Du
Dacheng Tao
MQ, VLM
20 Oct 2023
Robustness-Guided Image Synthesis for Data-Free Quantization
AAAI Conference on Artificial Intelligence (AAAI), 2023
Jianhong Bai
Yuchen Yang
Huanpeng Chu
Hualiang Wang
Zuo-Qiang Liu
Ruizhe Chen
Xiaoxuan He
Lianrui Mu
Chengfei Cai
Haoji Hu
DiffM, MQ
05 Oct 2023
NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation
Computer Vision and Pattern Recognition (CVPR), 2023
Minh-Tuan Tran
Trung Le
Xuan-May Le
Mehrtash Harandi
Quan Hung Tran
Dinh Q. Phung
30 Sep 2023
Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers
IEEE International Conference on Computer Vision (ICCV), 2023
N. Frumkin
Dibakar Gope
Diana Marculescu
MQ
21 Aug 2023
Sampling to Distill: Knowledge Transfer from Open-World Data
ACM Multimedia (ACM MM), 2023
Yuzheng Wang
Zhaoyu Chen
Jie M. Zhang
Dingkang Yang
Zuhao Ge
Yang Liu
Siao Liu
Yunquan Sun
Wenqiang Zhang
Lizhe Qi
31 Jul 2023
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images
IEEE International Conference on Computer Vision (ICCV), 2023
Jialiang Tang
Shuo Chen
Gang Niu
Masashi Sugiyama
Chenggui Gong
21 Jul 2023
Customizing Synthetic Data for Data-Free Student Learning
IEEE International Conference on Multimedia and Expo (ICME), 2023
Shiya Luo
Defang Chen
Can Wang
10 Jul 2023
Data-Free Backbone Fine-Tuning for Pruned Neural Networks
European Signal Processing Conference (EUSIPCO), 2023
Adrian Holzbock
Achyut Hegde
Klaus C. J. Dietmayer
Vasileios Belagiannis
22 Jun 2023
Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li
Yuxuan Li
Penghai Zhao
Renjie Song
Xiang Li
Jian Yang
22 May 2023
Model Conversion via Differentially Private Data-Free Distillation
International Joint Conference on Artificial Intelligence (IJCAI), 2023
Bochao Liu
Pengju Wang
Shikun Li
Dan Zeng
Shiming Ge
FedML
25 Apr 2023
A Survey on Approximate Edge AI for Energy Efficient Autonomous Driving Services
IEEE Communications Surveys and Tutorials (COMST), 2023
Dewant Katare
Diego Perino
J. Nurmi
M. Warnier
Marijn Janssen
Aaron Yi Ding
13 Apr 2023
Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation
AAAI Conference on Artificial Intelligence (AAAI), 2023
Yuzheng Wang
Zhaoyu Chen
Dingkang Yang
Pinxue Guo
Kaixun Jiang
Wenqiang Zhang
Lizhe Qi
AAML
21 Mar 2023
Data-Free Sketch-Based Image Retrieval
Computer Vision and Pattern Recognition (CVPR), 2023
Abhra Chaudhuri
A. Bhunia
Yi-Zhe Song
Anjan Dutta
14 Mar 2023
Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation
Computer Vision and Pattern Recognition (CVPR), 2023
Gaurav Patel
Konda Reddy Mopuri
Qiang Qiu
28 Feb 2023
Explicit and Implicit Knowledge Distillation via Unlabeled Data
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
Yuzheng Wang
Zuhao Ge
Zhaoyu Chen
Xiangjian Liu
Chuang Ma
Yunquan Sun
Lizhe Qi
17 Feb 2023
BOMP-NAS: Bayesian Optimization Mixed Precision NAS
Design, Automation and Test in Europe (DATE), 2023
David van Son
F. D. Putter
Sebastian Vogel
Henk Corporaal
MQ
27 Jan 2023
AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation
Knowledge-Based Systems (KBS), 2022
Hyungmin Kim
Sungho Suh
Sunghyun Baek
Daehwan Kim
Daun Jeong
Hansang Cho
Junmo Kim
20 Nov 2022
CPT-V: A Contrastive Approach to Post-Training Quantization of Vision Transformers
N. Frumkin
Dibakar Gope
Diana Marculescu
ViT, MQ
17 Nov 2022
Long-Range Zero-Shot Generative Deep Network Quantization
Neural Networks (NN), 2022
Yan Luo
Yangcheng Gao
Zhao Zhang
Haijun Zhang
Mingliang Xu
Meng Wang
MQ
13 Nov 2022
Zero-Shot Learning of a Conditional Generative Adversarial Network for Data-Free Network Quantization
IEEE International Conference on Image Processing (ICIP), 2021
Yoojin Choi
Mostafa El-Khamy
Jungwon Lee
GAN
26 Oct 2022
Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Neural Information Processing Systems (NeurIPS), 2022
Kien Do
Hung Le
D. Nguyen
Dang Nguyen
Haripriya Harikumar
T. Tran
Santu Rana
Svetha Venkatesh
21 Sep 2022
Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy
Information Sciences (Inf. Sci.), 2022
Jingru Li
Sheng Zhou
Liangcheng Li
Haishuai Wang
Zhi Yu
Jiajun Bu
29 Aug 2022
QuantFace: Towards Lightweight Face Recognition by Synthetic Data Low-bit Quantization
International Conference on Pattern Recognition (ICPR), 2022
Fadi Boutros
Naser Damer
Arjan Kuijper
CVBM, MQ
21 Jun 2022
Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training
International Conference on Machine Learning (ICML), 2022
Charbel Sakr
Steve Dai
Rangharajan Venkatesan
B. Zimmer
W. Dally
Brucek Khailany
MQ
13 Jun 2022
Few-Shot Unlearning by Model Inversion
Youngsik Yoon
Jinhwan Nam
Hyojeong Yun
Jaeho Lee
Dongwoo Kim
Jungseul Ok
MU
31 May 2022
CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing
IEEE Transactions on Multimedia (IEEE TMM), 2022
Zhiwei Hao
Yong Luo
Zhi Wang
Han Hu
J. An
24 May 2022
Self-distilled Knowledge Delegator for Exemplar-free Class Incremental Learning
IEEE International Joint Conference on Neural Networks (IJCNN), 2022
Fanfan Ye
Liang Ma
Qiaoyong Zhong
Di Xie
Shiliang Pu
BDL, CLL
23 May 2022
It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher
Computer Vision and Pattern Recognition (CVPR), 2022
Kanghyun Choi
Hye Yoon Lee
Deokki Hong
Joonsang Yu
Noseong Park
Youngsok Kim
Jinho Lee
MQ
31 Mar 2022
SQuant: On-the-Fly Data-Free Quantization via Diagonal Hessian Approximation
International Conference on Learning Representations (ICLR), 2022
Cong Guo
Yuxian Qiu
Jingwen Leng
Xiaotian Gao
Chen Zhang
Yunxin Liu
Fan Yang
Yuhao Zhu
Minyi Guo
MQ
14 Feb 2022
Distillation from heterogeneous unlabeled collections
Jean-Michel Begon
Pierre Geurts
17 Jan 2022
Data-Free Knowledge Transfer: A Survey
Yuang Liu
Wei Zhang
Jun Wang
Jianyong Wang
31 Dec 2021