ResearchTrend.AI

arXiv: 2002.01775 · Cited By
Feature-map-level Online Adversarial Knowledge Distillation
International Conference on Machine Learning (ICML), 2020 · 5 February 2020
Inseop Chung, Seonguk Park, Jangho Kim, Nojun Kwak
Tags: GAN
Versions: v1, v2, v3 (latest) · Links: arXiv (abs) · PDF · HTML

Papers citing "Feature-map-level Online Adversarial Knowledge Distillation"

50 / 63 papers shown
• The Role of Teacher Calibration in Knowledge Distillation
  Suyoung Kim, Seonguk Park, Junhoo Lee, Nojun Kwak
  IEEE Access, 2025 · 27 Aug 2025

• Asymmetric Decision-Making in Online Knowledge Distillation: Unifying Consensus and Divergence
  Zhaowei Chen, Borui Zhao, Yuchen Ge, Yuhao Chen, Renjie Song, Jiajun Liang
  09 Mar 2025

• CNN-Transformer Rectified Collaborative Learning for Medical Image Segmentation
  Lanhu Wu, Miao Zhang, Yongri Piao, Zhenyan Yao, Weibing Sun, Feng Tian, Huchuan Lu
  25 Aug 2024 · Tags: ViT, MedIm
• PRG: Prompt-Based Distillation Without Annotation via Proxy Relational Graph
  Yijin Xu, Jialun Liu, Hualiang Wei, Wenhui Li
  22 Aug 2024

• Vision-Based Detection of Uncooperative Targets and Components on Small Satellites
  Hannah Grauer, E. Lupu, Connor T. Lee, Soon-Jo Chung, Darren Rowen, Benjamen P. Bycroft, Phaedrus Leeds, John Brader
  22 Aug 2024

• CrowdTransfer: Enabling Crowd Knowledge Transfer in AIoT Community
  Yan Liu, Bin Guo, Nuo Li, Yasan Ding, Zhouyangzi Zhang, Zhiwen Yu
  09 Jul 2024

• SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation
  Wenhui Zhu, Xiwen Chen, Peijie Qiu, Mohammad Farazi, Aristeidis Sotiras, Abolfazl Razi, Yalin Wang
  21 Jun 2024 · Tags: SSeg, SSL
• Adaptive Teaching with Shared Classifier for Knowledge Distillation
  Jaeyeon Jang, Young-Ik Kim, Jisu Lim, Hyeonseong Lee
  12 Jun 2024

• From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks
  Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, ..., Zhenghua Chen, M. Aly, Jie Lin, Ruibing Jin, Xiaoli Li
  IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024 · 09 May 2024

• A Comprehensive Review of Knowledge Distillation in Computer Vision
  Sheikh Musa Kaleem, Tufail Rouf, Gousia Habib, Tausifa Jan Saleem, Brejesh Lall
  01 Apr 2024 · Tags: VLM

• LNPT: Label-free Network Pruning and Training
  Jinying Xiao, Ping Li, Zhe Tang, Jie Nie
  IEEE International Joint Conference on Neural Networks (IJCNN), 2024 · 19 Mar 2024
• Weakly Supervised Monocular 3D Detection with a Single-View Image
  Xue-Qiu Jiang, Sheng Jin, Lewei Lu, Xiaoqin Zhang, Shijian Lu
  29 Feb 2024

• Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning
  Haozhi Gao, Qianqian Ren, Jinbao Li
  31 Jan 2024 · Tags: AI4TS

• FerKD: Surgical Label Adaptation for Efficient Distillation
  Zhiqiang Shen
  IEEE International Conference on Computer Vision (ICCV), 2023 · 29 Dec 2023

• Multi-teacher Distillation for Multilingual Spelling Correction
  Jingfen Zhang, Xuan Guo, S. Bodapati, Christopher Potts
  20 Nov 2023 · Tags: KELM

• A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations
  Hui Ma, Jian Wang, Hongfei Lin, Bo Zhang, Yijia Zhang, Bo Xu
  IEEE Transactions on Multimedia (TMM), 2023 · 31 Oct 2023
• AMLNet: Adversarial Mutual Learning Neural Network for Non-AutoRegressive Multi-Horizon Time Series Forecasting
  Yang Lin
  International Conference on Data Science and Advanced Analytics (DSAA), 2023 · 30 Oct 2023 · Tags: AI4TS

• Illumination Distillation Framework for Nighttime Person Re-Identification and A New Benchmark
  Andong Lu, Zhang Zhang, Yan Huang, Yifan Zhang, Chenglong Li, Jin Tang, Liang Wang
  IEEE Transactions on Multimedia (TMM), 2023 · 31 Aug 2023

• Boosting Residual Networks with Group Knowledge
  Shengji Tang, Peng Ye, Baopu Li, Wei Lin, Tao Chen, Tong He, Chong Yu, Wanli Ouyang
  AAAI Conference on Artificial Intelligence (AAAI), 2023 · 26 Aug 2023

• Teacher-Student Architecture for Knowledge Distillation: A Survey
  Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
  08 Aug 2023
• Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data
  Qing Xu, Ruibing Jin, Xiaoli Li, K. Mao, Zhenghua Chen
  International Joint Conference on Artificial Intelligence (IJCAI), 2023 · 07 Jul 2023

• Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
  Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu
  19 Jun 2023 · Tags: VLM, OffRL

• Coaching a Teachable Student
  Jimuyang Zhang, Zanming Huang, Eshed Ohn-Bar
  Computer Vision and Pattern Recognition (CVPR), 2023 · 16 Jun 2023

• NORM: Knowledge Distillation via N-to-One Representation Matching
  Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao
  International Conference on Learning Representations (ICLR), 2023 · 23 May 2023

• Performance-aware Approximation of Global Channel Pruning for Multitask CNNs
  Hancheng Ye, Bo Zhang, Tao Chen, Jiayuan Fan, Bin Wang
  IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023 · 21 Mar 2023
• MetaMixer: A Regularization Strategy for Online Knowledge Distillation
  Maorong Wang, L. Xiao, T. Yamasaki
  14 Mar 2023 · Tags: KELM, MoE

• BD-KD: Balancing the Divergences for Online Knowledge Distillation
  Ibtihel Amara, N. Sepahvand, B. Meyer, W. Gross, J. Clark
  25 Dec 2022

• Lightning Fast Video Anomaly Detection via Adversarial Knowledge Distillation
  Florinel-Alin Croitoru, Nicolae-Cătălin Ristea, D. Dascalescu, Radu Tudor Ionescu, Fahad Shahbaz Khan, M. Shah
  Computer Vision and Image Understanding (CVIU), 2022 · 28 Nov 2022

• AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation
  Hyungmin Kim, Sungho Suh, Sunghyun Baek, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim
  Knowledge-Based Systems (KBS), 2022 · 20 Nov 2022

• SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization
  Masud An Nur Islam Fahim, Jani Boutellier
  01 Nov 2022
• Teacher-Student Architecture for Knowledge Learning: A Survey
  Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
  28 Oct 2022

• Bi-directional Weakly Supervised Knowledge Distillation for Whole Slide Image Classification
  Linhao Qu, Xiao-Zhuo Luo, Manning Wang, Zhijian Song
  Neural Information Processing Systems (NeurIPS), 2022 · 07 Oct 2022 · Tags: WSOD

• Switchable Online Knowledge Distillation
  Biao Qian, Yang Wang, Hongzhi Yin, Richang Hong, Meng Wang
  European Conference on Computer Vision (ECCV), 2022 · 12 Sep 2022

• Dense Depth Distillation with Out-of-Distribution Simulated Images
  Junjie Hu, Chenyou Fan, Mete Ozay, Hualie Jiang, Tin Lun Lam
  Knowledge-Based Systems (KBS), 2022 · 26 Aug 2022

• Multi-domain Learning for Updating Face Anti-spoofing Models
  Xiao Guo, Yaojie Liu, Anil Jain, Xiaoming Liu
  European Conference on Computer Vision (ECCV), 2022 · 23 Aug 2022 · Tags: CLL, CVBM
• Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
  Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang
  IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022 · 23 Jul 2022

• Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation
  Huarui He, Jie Wang, Zhanqiu Zhang, Feng Wu
  Knowledge Discovery and Data Mining (KDD), 2022 · 24 May 2022

• CEKD: Cross Ensemble Knowledge Distillation for Augmented Fine-grained Data
  Ke Zhang, Jin Fan, Shaoli Huang, Yongliang Qiao, Xiaofeng Yu, Fei-wei Qin
  13 Mar 2022

• PyNET-QxQ: An Efficient PyNET Variant for QxQ Bayer Pattern Demosaicing in CMOS Image Sensors
  Minhyeok Cho, Haechang Lee, Hyunwoo Je, Kijeong Kim, Dongil Ryu, Albert No
  IEEE Access, 2022 · 08 Mar 2022
• Tiny Object Tracking: A Large-scale Dataset and A Baseline
  Yabin Zhu, Chenglong Li, Yaoqi Liu, Tianlin Li, Jin Tang, Bin Luo, Zhixiang Huang
  IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022 · 11 Feb 2022 · Tags: ObjD

• Distillation from heterogeneous unlabeled collections
  Jean-Michel Begon, Pierre Geurts
  17 Jan 2022

• Data-Free Knowledge Transfer: A Survey
  Yuang Liu, Wei Zhang, Jun Wang, Jianyong Wang
  31 Dec 2021

• A Fast Knowledge Distillation Framework for Visual Recognition
  Zhiqiang Shen, Eric P. Xing
  02 Dec 2021 · Tags: VLM

• Improved Knowledge Distillation via Adversarial Collaboration
  Zhiqiang Liu, Chengkai Huang, Yanxia Liu
  29 Nov 2021

• Self-Distilled Self-Supervised Representation Learning
  Jiho Jang, Seonhoon Kim, Kiyoon Yoo, Chaerin Kong, Jang-Hyun Kim, Nojun Kwak
  25 Nov 2021 · Tags: SSL
• Local-Selective Feature Distillation for Single Image Super-Resolution
  Seonguk Park, Nojun Kwak
  22 Nov 2021

• A Survey on Green Deep Learning
  Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei Li
  08 Nov 2021 · Tags: VLM

• Self-Supervised Knowledge Transfer via Loosely Supervised Auxiliary Tasks
  Seungbum Hong, Jihun Yoon, Junmo Kim, Min-Kook Choi
  IEEE Winter Conference on Applications of Computer Vision (WACV), 2021 · 25 Oct 2021 · Tags: SSL

• MUSE: Feature Self-Distillation with Mutual Information and Self-Information
  Yunpeng Gong, Ye Yu, Gaurav Mittal, Greg Mori, Mei Chen
  British Machine Vision Conference (BMVC), 2021 · 25 Oct 2021 · Tags: SSL

• Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution
  Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
  IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021 · 07 Sep 2021