Variational Information Distillation for Knowledge Transfer
Sungsoo Ahn, S. Hu, Andreas C. Damianou, Neil D. Lawrence, Zhenwen Dai
arXiv:1904.05835 · 11 April 2019
Papers citing "Variational Information Distillation for Knowledge Transfer" (50 of 321 shown)
Self-Distillation for Further Pre-training of Transformers
Seanie Lee, Minki Kang, Juho Lee, Sung Ju Hwang, Kenji Kawaguchi
30 Sep 2022

Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh
21 Sep 2022

CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation
Ibtihel Amara, M. Ziaeefard, B. Meyer, W. Gross, J. Clark
15 Sep 2022

On-Device Domain Generalization
Kaiyang Zhou, Yuanhan Zhang, Yuhang Zang, Jingkang Yang, Chen Change Loy, Ziwei Liu
15 Sep 2022 · OOD

SKDCGN: Source-free Knowledge Distillation of Counterfactual Generative Networks using cGANs
Sameer Ambekar, Matteo Tafuro, Ankit Ankit, Diego van der Mast, Mark Alence, C. Athanasiadis
08 Aug 2022 · GAN

Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer
Arjun Ashok, K. J. Joseph, V. Balasubramanian
07 Aug 2022 · CLL

Meta-Learning based Degradation Representation for Blind Super-Resolution
Bin Xia, Yapeng Tian, Yulun Zhang, Yucheng Hang, Wenming Yang, Q. Liao
28 Jul 2022 · SupR
Federated Selective Aggregation for Knowledge Amalgamation
Don Xie, Ruonan Yu, Gongfan Fang, Zunlei Feng, Xinchao Wang, Li Sun, Mingli Song
27 Jul 2022 · FedML
Black-box Few-shot Knowledge Distillation
Dang Nguyen, Sunil R. Gupta, Kien Do, Svetha Venkatesh
25 Jul 2022

Novel Class Discovery without Forgetting
K. J. Joseph, S. Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, V. Balasubramanian
21 Jul 2022 · CLL

Locality Guidance for Improving Vision Transformers on Tiny Datasets
Kehan Li, Runyi Yu, Zhennan Wang, Li-ming Yuan, Guoli Song, Jie Chen
20 Jul 2022 · ViT

Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution
Ziru Xu, Zhenzhong Chen, Shan Liu
18 Jul 2022 · SupR

Knowledge Condensation Distillation
Chenxin Li, Mingbao Lin, Zhiyuan Ding, Nie Lin, Yihong Zhuang, Yue Huang, Xinghao Ding, Liujuan Cao
12 Jul 2022

HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors
Luting Wang, Xiaojie Li, Yue Liao, Jiang, Jianlong Wu, Fei Wang, Chao Qian, Si Liu
12 Jul 2022

Contrastive Deep Supervision
Linfeng Zhang, Xin Chen, Junbo Zhang, Runpei Dong, Kaisheng Ma
12 Jul 2022

PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient
Weihan Cao, Yifan Zhang, Jianfei Gao, Anda Cheng, Ke Cheng, Jian Cheng
05 Jul 2022

Knowledge Distillation of Transformer-based Language Models Revisited
Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang
29 Jun 2022 · VLM

Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification
Jun-Teng Yang, Sheng-Che Kao, S. Huang
26 Jun 2022

Mutual Information-guided Knowledge Transfer for Novel Class Discovery
Chuyu Zhang, Chuanyan Hu, Ruijie Xu, Zhitong Gao, Qian He, Xuming He
24 Jun 2022

Revisiting Self-Distillation
M. Pham, Minsu Cho, Ameya Joshi, C. Hegde
17 Jun 2022

Toward Student-Oriented Teacher Network Training For Knowledge Distillation
Chengyu Dong, Liyuan Liu, Jingbo Shang
14 Jun 2022

MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet
Nan Wang, Shaohui Lin, Xiaoxiao Li, Ke Li, Yunhang Shen, Yue Gao, Lizhuang Ma
02 Jun 2022 · ViT · MedIm

ORC: Network Group-based Knowledge Distillation using Online Role Change
Jun-woo Choi, Hyeon Cho, Seockhwa Jeong, Wonjun Hwang
01 Jun 2022

Parameter-Efficient and Student-Friendly Knowledge Distillation
Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao
28 May 2022

Region-aware Knowledge Distillation for Efficient Image-to-Image Translation
Linfeng Zhang, Xin Chen, Runpei Dong, Kaisheng Ma
25 May 2022 · VLM

Knowledge Distillation from A Stronger Teacher
Tao Huang, Shan You, Fei Wang, Chao Qian, Chang Xu
21 May 2022

[Re] Distilling Knowledge via Knowledge Review
Apoorva Verma, Pranjal Gulati, Sarthak Gupta
18 May 2022 · VLM

Knowledge Distillation Meets Open-Set Semi-Supervised Learning
Jing Yang, Xiatian Zhu, Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos
13 May 2022
Spot-adaptive Knowledge Distillation
Ying Chen, Jingwen Ye, Mingli Song
05 May 2022
Attention-based Knowledge Distillation in Multi-attention Tasks: The Impact of a DCT-driven Loss
Alejandro López-Cifuentes, Marcos Escudero-Viñolo, Jesús Bescós, Juan C. Sanmiguel
04 May 2022

Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye, Su Lu, De-Chuan Zhan
04 May 2022 · FedML

Proto2Proto: Can you recognize the car, the way I do?
Monish Keswani, Sriranjani Ramakrishnan, Nishant Reddy, V. Balasubramanian
25 Apr 2022

Selective Cross-Task Distillation
Su Lu, Han-Jia Ye, De-Chuan Zhan
25 Apr 2022

Knowledge Distillation with the Reused Teacher Classifier
Defang Chen, Jianhan Mei, Hailin Zhang, C. Wang, Yan Feng, Chun-Yen Chen
26 Mar 2022

Lightweight Graph Convolutional Networks with Topologically Consistent Magnitude Pruning
H. Sahbi
25 Mar 2022 · GNN

MKQ-BERT: Quantized BERT with 4-bits Weights and Activations
Hanlin Tang, Xipeng Zhang, Kai Liu, Jianchen Zhu, Zhanhui Kang
25 Mar 2022 · VLM · MQ

Domain Generalization by Mutual-Information Regularization with Pre-trained Models
Junbum Cha, Kyungjae Lee, Sungrae Park, Sanghyuk Chun
21 Mar 2022 · OOD

Cross-Modal Perceptionist: Can Face Geometry be Gleaned from Voices?
Cho-Ying Wu, Chin-Cheng Hsu, Ulrich Neumann
18 Mar 2022 · CVBM

Graph Flow: Cross-layer Graph Flow Distillation for Dual Efficient Medical Image Segmentation
Wen Zou, Muyi Sun
16 Mar 2022

Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation
Linfeng Zhang, Xin Chen, Xiaobing Tu, Pengfei Wan, N. Xu, Kaisheng Ma
12 Mar 2022

Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability
Ruifei He, Shuyang Sun, Jihan Yang, Song Bai, Xiaojuan Qi
10 Mar 2022

Extracting Effective Subnetworks with Gumbel-Softmax
Robin Dupont, M. Alaoui, H. Sahbi, A. Lebois
25 Feb 2022

Learn From the Past: Experience Ensemble Knowledge Distillation
Chaofei Wang, Shaowei Zhang, S. Song, Gao Huang
25 Feb 2022

Meta Knowledge Distillation
Jihao Liu, Boxiao Liu, Hongsheng Li, Yu Liu
16 Feb 2022

Knowledge Distillation with Deep Supervision
Shiya Luo, Defang Chen, Can Wang
16 Feb 2022

Distillation with Contrast is All You Need for Self-Supervised Point Cloud Representation Learning
Kexue Fu, Peng Gao, Renrui Zhang, Hongsheng Li, Yu Qiao, Manning Wang
09 Feb 2022 · SSL · 3DPC

Learning Representation from Neural Fisher Kernel with Low-rank Approximation
Ruixiang Zhang, Shuangfei Zhai, Etai Littwin, J. Susskind
04 Feb 2022 · SSL

Adaptive Instance Distillation for Object Detection in Autonomous Driving
Qizhen Lan, Qing Tian
26 Jan 2022

Contrastive Neighborhood Alignment
Pengkai Zhu, Zhaowei Cai, Yuanjun Xiong, Zhuowen Tu, Luis Goncalves, Vijay Mahadevan, Stefano Soatto
06 Jan 2022

Confidence-Aware Multi-Teacher Knowledge Distillation
Hailin Zhang, Defang Chen, Can Wang
30 Dec 2021