Revisiting Self-Distillation (arXiv:2206.08491)

17 June 2022
M. Pham, Minsu Cho, Ameya Joshi, C. Hegde
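For context on the technique named in the title: self-distillation retrains a model of the same architecture on the softened predictions of a previously trained copy of itself, mixed with the hard labels. Below is a minimal PyTorch sketch of one such round, assuming hypothetical names (`make_model`, `train_loader`) and illustrative hyperparameters (`T`, `alpha`); none of these details come from the paper itself.

```python
# Minimal self-distillation sketch (PyTorch).
# `make_model`, `train_loader`, T, alpha, lr are illustrative assumptions,
# not settings taken from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 as in standard knowledge distillation.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def self_distill_round(teacher, make_model, train_loader, epochs=10, lr=0.1):
    # Self-distillation: the student shares the teacher's architecture and
    # learns from the teacher's soft predictions plus the hard labels.
    student = make_model()
    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    teacher.eval()
    for _ in range(epochs):
        for x, y in train_loader:
            with torch.no_grad():
                t_logits = teacher(x)  # teacher is frozen during the round
            loss = distillation_loss(student(x), t_logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student  # can serve as the teacher for the next round
```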

Papers citing "Revisiting Self-Distillation"

15 / 15 papers shown

Quantifying Context Bias in Domain Adaptation for Object Detection
Hojun Son, Arpan Kusari · AI4CE · 23 Sep 2024

An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems
Hitesh Tulsiani, David M. Chan, Shalini Ghosh, Garima Lalwani, Prabhat Pandey, Ankish Bansal, Sri Garimella, Ariya Rastrow, Björn Hoffmeister · 16 Sep 2024

Gentle-CLIP: Exploring Aligned Semantic In Low-Quality Multimodal Data With Soft Alignment
Zijia Song, Z. Zang, Yelin Wang, Guozheng Yang, Jiangbin Zheng, Kaicheng Yu, Wanyu Chen, Stan Z. Li · 09 Jun 2024

FedDistill: Global Model Distillation for Local Model De-Biasing in Non-IID Federated Learning
Changlin Song, Divya Saxena, Jiannong Cao, Yuqing Zhao · FedML · 14 Apr 2024

Align and Distill: Unifying and Improving Domain Adaptive Object Detection
Justin Kay, T. Haucke, Suzanne Stathatos, Siqi Deng, Erik Young, Pietro Perona, Sara Beery, Grant Van Horn · 18 Mar 2024

Towards a theory of model distillation
Enric Boix-Adserà · FedML, VLM · 14 Mar 2024

Bayesian Optimization Meets Self-Distillation
HyunJae Lee, Heon Song, Hyeonsoo Lee, Gi-hyeon Lee, Suyeong Park, Donggeun Yoo · UQCV, BDL · 25 Apr 2023

Simulated Annealing in Early Layers Leads to Better Generalization
Amirm. Sarfi, Zahra Karimpour, Muawiz Chaudhary, N. Khalid, Mirco Ravanelli, Sudhir Mudur, Eugene Belilovsky · AI4CE, CLL · 10 Apr 2023

UNFUSED: UNsupervised Finetuning Using SElf supervised Distillation
Ashish Seth, Sreyan Ghosh, S. Umesh, Dinesh Manocha · 10 Mar 2023

Understanding Self-Distillation in the Presence of Label Noise
Rudrajit Das, Sujay Sanghavi · 30 Jan 2023

On student-teacher deviations in distillation: does it pay to disobey?
Vaishnavh Nagarajan, A. Menon, Srinadh Bhojanapalli, H. Mobahi, Surinder Kumar · 30 Jan 2023

Beyond Invariance: Test-Time Label-Shift Adaptation for Distributions with "Spurious" Correlations
Qingyao Sun, Kevin Murphy, S. Ebrahimi, Alexander D'Amour · OOD, VLM · 28 Nov 2022

SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization
Masud An Nur Islam Fahim, Jani Boutellier · 01 Nov 2022

Fast Yet Effective Speech Emotion Recognition with Self-distillation
Zhao Ren, Thanh Tam Nguyen, Yi Chang, Björn W. Schuller · 26 Oct 2022

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016