ResearchTrend.AI

Variational Information Distillation for Knowledge Transfer

Sungsoo Ahn, S. Hu, Andreas C. Damianou, Neil D. Lawrence, Zhenwen Dai (11 April 2019)

Papers citing "Variational Information Distillation for Knowledge Transfer"

50 / 321 papers shown
Pixel Distillation: A New Knowledge Distillation Scheme for Low-Resolution Image Recognition
  Guangyu Guo, Dingwen Zhang, Longfei Han, Nian Liu, Ming-Ming Cheng, Junwei Han (17 Dec 2021)

Information Theoretic Representation Distillation [MQ]
  Roy Miles, Adrian Lopez-Rodriguez, K. Mikolajczyk (01 Dec 2021)

Improved Knowledge Distillation via Adversarial Collaboration
  Zhiqiang Liu, Chengkai Huang, Yanxia Liu (29 Nov 2021)

Semi-Online Knowledge Distillation
  Zhiqiang Liu, Yanxia Liu, Chengkai Huang (23 Nov 2021)

Learning to Disentangle Scenes for Person Re-identification
  Xianghao Zang, Gezhong Li, Wei-Nan Gao, Xiujun Shu (10 Nov 2021)

A Survey on Green Deep Learning [VLM]
  Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei Li (08 Nov 2021)

AUTOKD: Automatic Knowledge Distillation Into A Student Architecture Family
  Roy Henha Eyono, Fabio Maria Carlucci, P. Esperança, Binxin Ru, Phillip Torr (05 Nov 2021)

Estimating and Maximizing Mutual Information for Knowledge Distillation
  A. Shrivastava, Yanjun Qi, Vicente Ordonez (29 Oct 2021)
Applications and Techniques for Fast Machine Learning in Science
  A. Deiana, Nhan Tran, Joshua C. Agar, Michaela Blott, G. D. Guglielmo, ..., Ashish Sharma, S. Summers, Pietro Vischia, J. Vlimant, Olivia Weng (25 Oct 2021)

MUSE: Feature Self-Distillation with Mutual Information and Self-Information [SSL]
  Yunpeng Gong, Ye Yu, Gaurav Mittal, Greg Mori, Mei Chen (25 Oct 2021)

Model Composition: Can Multiple Neural Networks Be Combined into a Single Network Using Only Unlabeled Data? [MoMe]
  Amin Banitalebi-Dehkordi, Xinyu Kang, Yong Zhang (20 Oct 2021)

A Variational Bayesian Approach to Learning Latent Variables for Acoustic Knowledge Transfer [BDL]
  Hu Hu, Sabato Marco Siniscalchi, Chao-Han Huck Yang, Chin-Hui Lee (16 Oct 2021)

Towards Streaming Egocentric Action Anticipation [EgoV]
  Antonino Furnari, G. Farinella (11 Oct 2021)

Towards Data-Free Domain Generalization [OOD]
  A. Frikha, Haokun Chen, Denis Krompass, Thomas Runkler, Volker Tresp (09 Oct 2021)

Student Helping Teacher: Teacher Evolution via Self-Knowledge Distillation
  Zheng Li, Xiang Li, Lingfeng Yang, Jian Yang, Zhigeng Pan (01 Oct 2021)
Deep Neural Compression Via Concurrent Pruning and Self-Distillation [VLM]
  J. Ó. Neill, Sourav Dutta, H. Assem (30 Sep 2021)

A Systematic Survey of Deep Learning-based Single-Image Super-Resolution
  Juncheng Li, Zehua Pei, Wenjie Li, Guangwei Gao, Longguang Wang, Yingqian Wang, T. Zeng (29 Sep 2021)

Partial to Whole Knowledge Distillation: Progressive Distilling Decomposed Knowledge Boosts Student Better
  Xuanyang Zhang, Xinming Zhang, Jian Sun (26 Sep 2021)

Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution
  Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu (07 Sep 2021)

Full-Cycle Energy Consumption Benchmark for Low-Carbon Computer Vision
  Bo-wen Li, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li (30 Aug 2021)

An Information Theory-inspired Strategy for Automatic Network Pruning
  Xiawu Zheng, Yuexiao Ma, Teng Xi, Gang Zhang, Errui Ding, Yuchao Li, Jie Chen, Yonghong Tian, Rongrong Ji (19 Aug 2021)

Multi-granularity for knowledge distillation
  Baitan Shao, Ying Chen (15 Aug 2021)

Semi-Supervised Domain Generalizable Person Re-Identification [OOD]
  Lingxiao He, Wu Liu, Jian Liang, Kecheng Zheng, Xingyu Liao, Peng Cheng, Tao Mei (11 Aug 2021)

Hierarchical Self-supervised Augmented Knowledge Distillation [SSL]
  Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu (29 Jul 2021)
Weight Reparametrization for Budget-Aware Network Pruning
  Robin Dupont, H. Sahbi, Guillaume Michel (08 Jul 2021)

Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation
  Bingchen Zhao, Kai Han (07 Jul 2021)

On The Distribution of Penultimate Activations of Classification Networks [UQCV]
  Minkyo Seo, Yoonho Lee, Suha Kwak (05 Jul 2021)

Revisiting Knowledge Distillation: An Inheritance and Exploration Framework
  Zhen Huang, Xu Shen, Jun Xing, Tongliang Liu, Xinmei Tian, Houqiang Li, Bing Deng, Jianqiang Huang, Xiansheng Hua (01 Jul 2021)

Self-Contrastive Learning: Single-viewed Supervised Contrastive Framework using Sub-network [SSL]
  Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, SeungJong Noh, Se-Young Yun (29 Jun 2021)

ACN: Adversarial Co-training Network for Brain Tumor Segmentation with Missing Modalities
  Yixin Wang, Yang Zhang, Yang Liu, Zihao Lin, Jiang Tian, Cheng Zhong, Zhongchao Shi, Jianping Fan, Zhiqiang He (28 Jun 2021)

Midpoint Regularization: from High Uncertainty Training to Conservative Classification
  Hongyu Guo (26 Jun 2021)

Dual-Teacher Class-Incremental Learning With Data-Free Generative Replay [CLL]
  Yoojin Choi, Mostafa El-Khamy, Jungwon Lee (17 Jun 2021)

BERT Learns to Teach: Knowledge Distillation with Meta Learning
  Wangchunshu Zhou, Canwen Xu, Julian McAuley (08 Jun 2021)
Fair Feature Distillation for Visual Recognition
  S. Jung, Donggyu Lee, Taeeon Park, Taesup Moon (27 May 2021)

High-Frequency aware Perceptual Image Enhancement
  Hyungmin Roh, Myung-joo Kang (25 May 2021)

Undistillable: Making A Nasty Teacher That CANNOT teach students
  Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Chenyu You, Xiaohui Xie, Zhangyang Wang (16 May 2021)

Interpretable Embedding Procedure Knowledge Transfer via Stacked Principal Component Analysis and Graph Neural Network
  Seunghyun Lee, B. Song (28 Apr 2021)

Voice2Mesh: Cross-Modal 3D Face Model Generation from Voices [CVBM, 3DH]
  Cho-Ying Wu, Ke Xu, Chin-Cheng Hsu, Ulrich Neumann (21 Apr 2021)

Distilling Knowledge via Knowledge Review
  Pengguang Chen, Shu Liu, Hengshuang Zhao, Jiaya Jia (19 Apr 2021)

Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis
  Zehao Wang (10 Apr 2021)

Learning from 2D: Contrastive Pixel-to-Point Knowledge Transfer for 3D Pretraining [3DPC]
  Yueh-Cheng Liu, Yu-Kai Huang, HungYueh Chiang, Hung-Ting Su, Zhe-Yu Liu, Chin-Tang Chen, Ching-Yu Tseng, Winston H. Hsu (10 Apr 2021)

Distilling and Transferring Knowledge via cGAN-generated Samples for Image Classification and Regression
  Xin Ding, Z. J. Wang, Zuheng Xu, Z. Jane Wang, William J. Welch (07 Apr 2021)
The Multi-Agent Behavior Dataset: Mouse Dyadic Social Interactions
  Jennifer J. Sun, Tomomi Karigo, Dipam Chakraborty, Sharada Mohanty, Benjamin Wild, ..., Chen Chen, D. Anderson, Pietro Perona, Yisong Yue, Ann Kennedy (06 Apr 2021)

Compressing Visual-linguistic Model via Knowledge Distillation [VLM]
  Zhiyuan Fang, Jianfeng Wang, Xiaowei Hu, Lijuan Wang, Yezhou Yang, Zicheng Liu (05 Apr 2021)

Complementary Relation Contrastive Distillation
  Jinguo Zhu, Shixiang Tang, Dapeng Chen, Shijie Yu, Yakun Liu, A. Yang, M. Rong, Xiaohua Wang (29 Mar 2021)

Embedding Transfer with Label Relaxation for Improved Metric Learning
  Sungyeon Kim, Dongwon Kim, Minsu Cho, Suha Kwak (27 Mar 2021)

Deep Ensemble Collaborative Learning by using Knowledge-transfer Graph for Fine-grained Object Classification [FedML]
  Naoki Okamoto, Soma Minami, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi (27 Mar 2021)

Distilling Object Detectors via Decoupled Features
  Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu (26 Mar 2021)

Student Network Learning via Evolutionary Knowledge Distillation
  Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge (23 Mar 2021)

Compacting Deep Neural Networks for Internet of Things: Methods and Applications
  Ke Zhang, Hanbo Ying, Hongning Dai, Lin Li, Yuangyuang Peng, Keyi Guo, Hongfang Yu (20 Mar 2021)