Deep Mutual Learning
Ying Zhang, Tao Xiang, Timothy M. Hospedales, Huchuan Lu
1 June 2017 · arXiv:1706.00384 · FedML

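For context, the cited paper trains a cohort of student networks jointly: each network minimizes its own supervised cross-entropy loss plus the KL divergence from each peer's predicted distribution, so the students teach one another throughout training. Below is a minimal PyTorch-style sketch of the two-network objective; the function and variable names are illustrative, not taken from the authors' released code.

```python
import torch.nn.functional as F

def dml_losses(logits1, logits2, targets):
    """Two-student Deep Mutual Learning losses for one training step.

    Each student's loss = cross-entropy on the labels
    + KL(peer distribution || own distribution), with the peer's
    predictions treated as fixed targets, mirroring the paper's
    alternating per-network updates.
    """
    # Supervised losses on the ground-truth labels
    ce1 = F.cross_entropy(logits1, targets)
    ce2 = F.cross_entropy(logits2, targets)
    # F.kl_div(input=log p_self, target=p_peer) computes KL(p_peer || p_self)
    kl1 = F.kl_div(F.log_softmax(logits1, dim=1),
                   F.softmax(logits2, dim=1).detach(),
                   reduction="batchmean")
    kl2 = F.kl_div(F.log_softmax(logits2, dim=1),
                   F.softmax(logits1, dim=1).detach(),
                   reduction="batchmean")
    return ce1 + kl1, ce2 + kl2
```

Each loss is backpropagated through its own network only; with more than two students, the KL term is averaged over all peers.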

Papers citing "Deep Mutual Learning"

50 / 222 papers shown

Swapped Logit Distillation via Bi-level Teacher Alignment
Stephen Ekaputra Limantoro, Jhe-Hao Lin, Chih-Yu Wang, Yi-Lung Tsai, Hong-Han Shuai, Ching-Chun Huang, Wen-Huang Cheng
27 Apr 2025

Causality Enhanced Origin-Destination Flow Prediction in Data-Scarce Cities
Tao Feng, Yunke Zhang, Huandong Wang, Yong Li
09 Mar 2025

Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective
Jinjing Zhu, Songze Li, Lin Wang
13 Jan 2025

Knowledge Distillation with Adapted Weight
Sirong Wu, Xi Luo, Junjie Liu, Yuhui Deng
06 Jan 2025

GazeGen: Gaze-Driven User Interaction for Visual Content Generation
He-Yen Hsieh, Ziyun Li, Sai Qian Zhang, W. Ting, Kao-Den Chang, B. D. Salvo, Chiao Liu, H. T. Kung
07 Nov 2024 · VGen

Multiple Information Prompt Learning for Cloth-Changing Person Re-Identification
Shengxun Wei, Zan Gao, Yibo Zhao, Weili Guan, Shengyong Chen
01 Nov 2024

Browsing without Third-Party Cookies: What Do You See?
Maxwell Lin, Shihan Lin, Helen Wu, Karen Wang, Xiaowei Yang
14 Oct 2024 · BDL

Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal
30 Sep 2024

Harmonizing knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Zaomin Yan, Chaomin Shen, Faming Fang, Guixu Zhang
27 Sep 2024

Online Multi-level Contrastive Representation Distillation for Cross-Subject fNIRS Emotion Recognition
Zhili Lai, Chunmei Qing, Junpeng Tan, Wanxiang Luo, Xiangmin Xu
24 Sep 2024

Deep Companion Learning: Enhancing Generalization Through Historical Consistency
Ruizhao Zhu, Venkatesh Saligrama
26 Jul 2024 · FedML

Deep Mutual Learning among Partially Labeled Datasets for Multi-Organ Segmentation
Xiaoyu Liu, Linhao Qu, Ziyue Xie, Yonghong Shi, Zhijian Song
17 Jul 2024

Continual Learning with Diffusion-based Generative Replay for Industrial Streaming Data
Jiayi He, Jiao Chen, Qianmiao Liu, Suyan Dai, Jianhua Tang, Dongpo Liu
22 Jun 2024 · DiffM, AI4CE

DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas
12 Jun 2024

Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors
Chun-Yin Huang, Kartik Srinivas, Xin Zhang, Xiaoxiao Li
19 May 2024 · DD

The Curse of Diversity in Ensemble-Based Exploration
Zhixuan Lin, P. D'Oro, Evgenii Nikishin, Aaron C. Courville
07 May 2024

CrossMatch: Enhance Semi-Supervised Medical Image Segmentation with Perturbation Strategies and Knowledge Distillation
Bin Zhao, Chunshi Wang, Shuxue Ding
01 May 2024

CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu
22 Apr 2024 · VLM

Practical Insights into Knowledge Distillation for Pre-Trained Models
Norah Alballa, Marco Canini
22 Feb 2024

Predictive Churn with the Set of Good Models
J. Watson-Daniels, Flavio du Pin Calmon, Alexander D'Amour, Carol Xuan Long, David C. Parkes, Berk Ustun
12 Feb 2024

P2Seg: Pointly-supervised Segmentation via Mutual Distillation
Zipeng Wang, Xuehui Yu, Xumeng Han, Wenwen Yu, Zhixun Huang, Jianbin Jiao, Zhenjun Han
18 Jan 2024

Mutual Distillation Learning For Person Re-Identification
Huiyuan Fu, Kuilong Cui, Chuanming Wang, Mengshi Qi, Huadong Ma
12 Jan 2024

Semi-supervised learning via DQN for log anomaly detection
Yingying He, Xiaobing Pei, Lihong Shen
06 Jan 2024

Decoupled Knowledge with Ensemble Learning for Online Distillation
Baitan Shao, Ying Chen
18 Dec 2023

Cooperative Learning for Cost-Adaptive Inference
Xingli Fang, Richard M. Bradford, Jung-Eun Kim
13 Dec 2023

Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization
Jixuan Leng, Yijiang Li, Haohan Wang
26 Nov 2023 · VLM

A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations
Hui Ma, Jian Wang, Hongfei Lin, Bo Zhang, Yijia Zhang, Bo Xu
31 Oct 2023

Understanding the Effects of Projectors in Knowledge Distillation
Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang
26 Oct 2023

Expression Syntax Information Bottleneck for Math Word Problems
Jing Xiong, Chengming Li, Min Yang, Xiping Hu, Bin Hu
24 Oct 2023

Dual Compensation Residual Networks for Class Imbalanced Learning
Rui Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, Xilin Chen
25 Aug 2023

Multi-Label Knowledge Distillation
Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang
12 Aug 2023

Multi-View Fusion and Distillation for Subgrade Distresses Detection based on 3D-GPR
Chunpeng Zhou, Kang Ning, Haishuai Wang, Zhi Yu, Sheng Zhou, Jiajun Bu
09 Aug 2023

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
08 Aug 2023

SwinMM: Masked Multi-view with Swin Transformers for 3D Medical Image Segmentation
Yiqing Wang, Zihan Li, Jieru Mei, Zi-Ying Wei, Li Liu, Chen Wang, Shengtian Sang, Alan Yuille, Cihang Xie, Yuyin Zhou
24 Jul 2023 · ViT, MedIm

Frameless Graph Knowledge Distillation
Dai Shi, Zhiqi Shao, Yi Guo, Junbin Gao
13 Jul 2023

CrossKD: Cross-Head Knowledge Distillation for Object Detection
Jiabao Wang, Yuming Chen, Zhaohui Zheng, Xiang Li, Ming-Ming Cheng, Qibin Hou
20 Jun 2023

Towards Higher Pareto Frontier in Multilingual Machine Translation
Yi-Chong Huang, Xiaocheng Feng, Xinwei Geng, Baohang Li, Bing Qin
25 May 2023

Decoupled Kullback-Leibler Divergence Loss
Jiequan Cui, Zhuotao Tian, Zhisheng Zhong, Xiaojuan Qi, Bei Yu, Hanwang Zhang
23 May 2023

Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li, Yuxuan Li, Penghai Zhao, Renjie Song, Xiang Li, Jian Yang
22 May 2023

Self-Distillation with Meta Learning for Knowledge Graph Completion
Yunshui Li, Junhao Liu, Chengming Li, Min Yang
20 May 2023

Student-friendly Knowledge Distillation
Mengyang Yuan, Bo Lang, Fengnan Quan
18 May 2023

Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation
Yuxin Ren, Zi-Qi Zhong, Xingjian Shi, Yi Zhu, Chun Yuan, Mu Li
16 May 2023

GeNAS: Neural Architecture Search with Better Generalization
Joonhyun Jeong, Joonsang Yu, Geondo Park, Dongyoon Han, Y. Yoo
15 May 2023

EAML: Ensemble Self-Attention-based Mutual Learning Network for Document Image Classification
Souhail Bakkali, Zuheng Ming, Mickael Coustaty, Marçal Rusiñol
11 May 2023

CORSD: Class-Oriented Relational Self Distillation
Muzhou Yu, S. Tan, Kailu Wu, Runpei Dong, Linfeng Zhang, Kaisheng Ma
28 Apr 2023

Self-discipline on multiple channels
Jiutian Zhao, Liangchen Luo, Hao Wang
27 Apr 2023

Function-Consistent Feature Distillation
Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen
24 Apr 2023

Grouped Knowledge Distillation for Deep Face Recognition
Weisong Zhao, Xiangyu Zhu, Kaiwen Guo, Xiaoyu Zhang, Zhen Lei
10 Apr 2023 · CVBM

Generalization Matters: Loss Minima Flattening via Parameter Hybridization for Efficient Online Knowledge Distillation
Tianli Zhang, Mengqi Xue, Jiangtao Zhang, Haofei Zhang, Yu Wang, Lechao Cheng, Jie Song, Mingli Song
26 Mar 2023

Focus on Your Target: A Dual Teacher-Student Framework for Domain-adaptive Semantic Segmentation
Xinyue Huo, Lingxi Xie, Wen-gang Zhou, Houqiang Li, Qi Tian
16 Mar 2023