SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation

13 January 2022
K. Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash

Papers citing "SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation"

22 / 22 papers shown
MoKD: Multi-Task Optimization for Knowledge Distillation
Zeeshan Hayder, A. Cheraghian, Lars Petersson, Mehrtash Harandi
VLM · 13 May 2025

Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
VLM · 12 May 2025

Multi-Token Enhancing for Vision Representation Learning
Zhong-Yu Li, Yu-Song Hu, Bo Yin, Ming-Ming Cheng
24 Nov 2024

On the Surprising Effectiveness of Attention Transfer for Vision Transformers
Alexander C. Li, Yuandong Tian, B. Chen, Deepak Pathak, Xinlei Chen
14 Nov 2024

Simple Unsupervised Knowledge Distillation With Space Similarity
Aditya Singh, Haohan Wang
20 Sep 2024

Improving Text-guided Object Inpainting with Semantic Pre-inpainting
Yifu Chen, Jingwen Chen, Yingwei Pan, Yehao Li, Ting Yao, Zhineng Chen, Tao Mei
DiffM · 12 Sep 2024

ProFeAT: Projected Feature Adversarial Training for Self-Supervised Learning of Robust Representations
Sravanti Addepalli, Priyam Dey, R. Venkatesh Babu
09 Jun 2024

$V_kD$: Improving Knowledge Distillation using Orthogonal Projections
Roy Miles, Ismail Elezi, Jiankang Deng
10 Mar 2024

On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models
Juliette Marrie, Michael Arbel, Julien Mairal, Diane Larlus
VLM · MQ · 17 Feb 2024

Plasticity-Optimized Complementary Networks for Unsupervised Continual Learning
Alex Gomez-Villa, Bartlomiej Twardowski, Kai Wang, Joost van de Weijer
SSL · CLL · 12 Sep 2023

Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning
Kaiyou Song, Jin Xie, Shanyi Zhang, Zimeng Luo
13 Apr 2023

Understanding the Role of the Projector in Knowledge Distillation
Roy Miles, K. Mikolajczyk
20 Mar 2023

A Simple Recipe for Competitive Low-compute Self-supervised Vision Models
Quentin Duval, Ishan Misra, Nicolas Ballas
23 Jan 2023

Unifying Synergies between Self-supervised Learning and Dynamic Computation
Tarun Krishna, Ayush Rai, Alexandru Drimbarean, Eric Arazo, Paul Albert, A. Smeaton, Kevin McGuinness, Noel E. O'Connor
22 Jan 2023

Towards Sustainable Self-supervised Learning
Shanghua Gao, Pan Zhou, Ming-Ming Cheng, Shuicheng Yan
CLL · 20 Oct 2022

Effective Self-supervised Pre-training on Low-compute Networks without Distillation
Fuwen Tan, F. Saleh, Brais Martínez
06 Oct 2022

Attention Distillation: self-supervised vision transformer students need more guidance
Kai Wang, Fei Yang, Joost van de Weijer
ViT · 03 Oct 2022

Bag of Instances Aggregation Boosts Self-supervised Distillation
Haohang Xu, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, Xinggang Wang, Wenrui Dai, H. Xiong, Qi Tian
SSL · 04 Jul 2021

SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu
SSL · 12 Jan 2021

Improved Baselines with Momentum Contrastive Learning
Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He
SSL · 09 Mar 2020

Boosting Self-Supervised Learning via Knowledge Transfer
M. Noroozi, Ananth Vinjimoor, Paolo Favaro, Hamed Pirsiavash
SSL · 01 May 2018

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH · 17 Apr 2017