Self-Distillation from the Last Mini-Batch for Consistency Regularization
Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo
arXiv:2203.16172, 30 March 2022
Papers citing "Self-Distillation from the Last Mini-Batch for Consistency Regularization" (20 of 20 papers shown):
sDREAMER: Self-distilled Mixture-of-Modality-Experts Transformer for Automatic Sleep Staging
Jingyuan Chen, Yuan Yao, Mie Anderson, Natalie Hauglund, Celia Kjaerby, Verena Untiet, Maiken Nedergaard, Jiebo Luo
28 Jan 2025
Dynamic Self-Distillation via Previous Mini-batches for Fine-tuning Small Language Models
Y. Fu, Yin Yu, Xiaotian Han, Runchao Li, Xianxuan Long, Haotian Yu, Pan Li [SyDa]
25 Nov 2024
CleaR: Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning
Yeachan Kim, Junho Kim, SangKeun Lee [NoLa, AAML]
31 Oct 2024
Reinforced Imitative Trajectory Planning for Urban Automated Driving
Di Zeng, Ling Zheng, Xiantong Yang, Yinong Li
21 Oct 2024
S2HPruner: Soft-to-Hard Distillation Bridges the Discretization Gap in Pruning
Weihao Lin, Shengji Tang, Chong Yu, Peng Ye, Tao Chen
09 Oct 2024
Self-Cooperation Knowledge Distillation for Novel Class Discovery
Yuzheng Wang, Zhaoyu Chen, Dingkang Yang, Yunquan Sun, Lizhe Qi
02 Jul 2024
A Comprehensive Review of Knowledge Distillation in Computer Vision
Sheikh Musa Kaleem, Tufail Rouf, Gousia Habib, Tausifa Jan Saleem, Brejesh Lall [VLM]
01 Apr 2024
Enhanced Sparsification via Stimulative Training
Shengji Tang, Weihao Lin, Hancheng Ye, Peng Ye, Chong Yu, Baopu Li, Tao Chen
11 Mar 2024
Dynamic Prototype Adaptation with Distillation for Few-shot Point Cloud Segmentation
Jie Liu, Wenzhe Yin, Haochen Wang, Yunlu Chen, J. Sonke, E. Gavves [3DPC]
29 Jan 2024
Boosting Residual Networks with Group Knowledge
Shengji Tang, Peng Ye, Baopu Li, Wei Lin, Tao Chen, Tong He, Chong Yu, Wanli Ouyang
26 Aug 2023
A Lightweight Approach for Network Intrusion Detection based on Self-Knowledge Distillation
Shuo Yang, Xinran Zheng, Zhengzhuo Xu, Xingjun Wang
09 Jul 2023
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu [VLM, OffRL]
19 Jun 2023
Lightweight Self-Knowledge Distillation with Multi-source Information Fusion
Xucong Wang, Pengchao Han, Lei Guo
16 May 2023
Self-discipline on multiple channels
Jiutian Zhao, Liangchen Luo, Hao Wang
27 Apr 2023
A Survey of Historical Learning: Learning Models with Learning History
Xiang Li, Ge Wu, Lingfeng Yang, Wenzhe Wang, Renjie Song, Jian Yang [MU, AI4TS]
23 Mar 2023
AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation
Hyungmin Kim, Sungho Suh, Sunghyun Baek, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim
20 Nov 2022
Analyzing the Noise Robustness of Deep Neural Networks
Kelei Cao, Mengchen Liu, Hang Su, Jing Wu, Jun Zhu, Shixia Liu [AAML]
26 Jan 2020
Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Antti Tarvainen, Harri Valpola [OOD, MoMe]
06 Mar 2017
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei [VLM, ObjD]
01 Sep 2014