Peer Collaborative Learning for Online Knowledge Distillation

7 June 2020
Guile Wu, S. Gong
FedML

Papers citing "Peer Collaborative Learning for Online Knowledge Distillation"

19 / 19 papers shown

Cross-Modal and Uncertainty-Aware Agglomeration for Open-Vocabulary 3D Scene Understanding
Jinlong Li, Cristiano Saltori, Fabio Poiesi, N. Sebe
162 · 0 · 0
20 Mar 2025

Online Multi-level Contrastive Representation Distillation for Cross-Subject fNIRS Emotion Recognition
Zhili Lai, Chunmei Qing, Junpeng Tan, Wanxiang Luo, Xiangmin Xu
21 · 1 · 0
24 Sep 2024

The Curse of Diversity in Ensemble-Based Exploration
Zhixuan Lin, P. D'Oro, Evgenii Nikishin, Aaron C. Courville
42 · 1 · 0
07 May 2024

Decoupled Knowledge with Ensemble Learning for Online Distillation
Baitan Shao, Ying Chen
23 · 0 · 0
18 Dec 2023

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
21 · 16 · 0
08 Aug 2023

Generalization Matters: Loss Minima Flattening via Parameter Hybridization for Efficient Online Knowledge Distillation
Tianli Zhang, Mengqi Xue, Jiangtao Zhang, Haofei Zhang, Yu Wang, Lechao Cheng, Jie Song, Mingli Song
28 · 5 · 0
26 Mar 2023

Distilling Calibrated Student from an Uncalibrated Teacher
Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra
FedML
32 · 2 · 0
22 Feb 2023

Rethinking Soft Label in Label Distribution Learning Perspective
Seungbum Hong, Jihun Yoon, Bogyu Park, Min-Kook Choi
31 · 0 · 0
31 Jan 2023

Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
20 · 35 · 0
28 Oct 2022

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang
33 · 50 · 0
23 Jul 2022

A General Multiple Data Augmentation Based Framework for Training Deep Neural Networks
Bin Hu, Yu Sun, A. K. Qin
AI4CE
28 · 0 · 0
29 May 2022

Self-Distillation from the Last Mini-Batch for Consistency Regularization
Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo
15 · 60 · 0
30 Mar 2022

Channel Self-Supervision for Online Knowledge Distillation
Shixi Fan, Xuan Cheng, Xiaomin Wang, Chun Yang, Pan Deng, Minghui Liu, Jiali Deng, Meilin Liu
16 · 1 · 0
22 Mar 2022

Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering
SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu
17 · 12 · 0
26 Feb 2022

Mutual Contrastive Learning for Visual Representation Learning
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
VLM, SSL
97 · 75 · 0
26 Apr 2021

DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation
Alexandre Ramé, Matthieu Cord
FedML
45 · 51 · 0
14 Jan 2021

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
VLM
19 · 2,837 · 0
09 Jun 2020

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
192 · 473 · 0
12 Jun 2018

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
297 · 10,216 · 0
16 Nov 2016