Cited By: arXiv:2010.14713
CompRess: Self-Supervised Learning by Compressing Representations
Neural Information Processing Systems (NeurIPS), 2020
28 October 2020
Soroush Abbasi Koohpayegani
Ajinkya Tejankar
Hamed Pirsiavash
SSL
ArXiv (abs) · PDF · HTML · Github (78★)
Papers citing "CompRess: Self-Supervised Learning by Compressing Representations" (50 of 62 papers shown)
Learning Task-Agnostic Representations through Multi-Teacher Distillation
Philippe Formont
Maxime Darrin
Banafsheh Karimian
Jackie Chi Kit Cheung
Eric Granger
Ismail Ben Ayed
Mohammadhadi Shateri
Pablo Piantanida
222
2
0
21 Oct 2025
Backdooring Self-Supervised Contrastive Learning by Noisy Alignment
Tuo Chen
Jie Gui
Minjing Dong
Ju Jia
Lanting Fang
Jian Liu
AAML
167
3
0
19 Aug 2025
PCoreSet: Effective Active Learning through Knowledge Distillation from Vision-Language Models
Seongjae Kang
Dong Bok Lee
Hyungjoon Jang
Dongseop Kim
Sung Ju Hwang
VLM
462
0
0
01 Jun 2025
Simple yet Effective Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang
Dong Bok Lee
Hyungjoon Jang
Sung Ju Hwang
VLM
517
1
0
12 May 2025
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks
International Conference on Learning Representations (ICLR), 2024
S. Joshi
Jiayi Ni
Baharan Mirzasoleiman
DD
773
4
0
03 Oct 2024
Simple Unsupervised Knowledge Distillation With Space Similarity
European Conference on Computer Vision (ECCV), 2024
Aditya Singh
Haohan Wang
402
6
0
20 Sep 2024
Federated Graph Semantic and Structural Learning
Wenke Huang
Guancheng Wan
Mang Ye
Bo Du
FedML
463
86
0
27 Jun 2024
Lightweight Model Pre-training via Language Guided Knowledge Distillation
Mingsheng Li
Lin Zhang
Mingzhen Zhu
Zilong Huang
Gang Yu
Jiayuan Fan
Tao Chen
271
4
0
17 Jun 2024
To Distill or Not to Distill? On the Robustness of Robust Knowledge Distillation
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Abdul Waheed
Karima Kadaoui
Muhammad Abdul-Mageed
VLM
258
6
0
06 Jun 2024
Relational Self-supervised Distillation with Compact Descriptors for Image Copy Detection
Juntae Kim
Sungwon Woo
Jongho Nang
611
1
0
28 May 2024
Retro: Reusing teacher projection head for efficient embedding distillation on Lightweight Models via Self-supervised Learning
Khanh-Binh Nguyen
Chae Jung Park
198
0
0
24 May 2024
An Experimental Study on Exploring Strong Lightweight Vision Transformers via Masked Image Modeling Pre-Training
Jin Gao
Shubo Lin
Shaoru Wang
Yutong Kou
Zeming Li
Liang Li
Congxuan Zhang
Xiaoqin Zhang
Yizheng Wang
Weiming Hu
355
9
0
18 Apr 2024
Visually Grounded Speech Models have a Mutual Exclusivity Bias
Leanne Nortje
Dan Oneaţă
Yevgen Matusevych
Herman Kamper
SSL
331
2
0
20 Mar 2024
On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models
Juliette Marrie
Michael Arbel
Julien Mairal
Diane Larlus
VLM
MQ
346
2
0
17 Feb 2024
Deep Clustering with Diffused Sampling and Hardness-aware Self-distillation
Hai-Xin Zhang
Dong Huang
495
2
0
25 Jan 2024
I²MD: 3D Action Representation Learning with Inter- and Intra-modal Mutual Distillation
Yunyao Mao
Jiajun Deng
Wen-gang Zhou
Zhenbo Lu
Wanli Ouyang
Houqiang Li
VLM
297
0
0
24 Oct 2023
A Machine Learning-oriented Survey on Tiny Machine Learning
IEEE Access (IEEE Access), 2023
Luigi Capogrosso
Federico Cunico
D. Cheng
Franco Fummi
Marco Cristani
SyDa
MU
455
94
0
21 Sep 2023
COMEDIAN: Self-Supervised Learning and Knowledge Distillation for Action Spotting using Transformers
J. Denize
Mykola Liashuha
Jaonary Rabarisoa
Astrid Orcesi
Romain Hérault
ViT
405
21
0
03 Sep 2023
Embrace Limited and Imperfect Training Datasets: Opportunities and Challenges in Plant Disease Recognition Using Deep Learning
Frontiers in Plant Science (Front. Plant Sci.), 2023
Mingle Xu
H. Kim
Jucheng Yang
A. Fuentes
Yao Meng
Sook Yoon
Taehyun Kim
D. Park
277
40
0
19 May 2023
Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning
Computer Vision and Pattern Recognition (CVPR), 2023
Kaiyou Song
Jin Xie
Shanyi Zhang
Zimeng Luo
426
37
0
13 Apr 2023
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning
Computer Vision and Pattern Recognition (CVPR), 2023
Ajinkya Tejankar
Maziar Sanjabi
Qifan Wang
Sinong Wang
Hamed Firooz
Hamed Pirsiavash
L Tan
AAML
247
33
0
04 Apr 2023
Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems
Computer Vision and Pattern Recognition (CVPR), 2023
Yangyang Shu
Anton Van Den Hengel
Lingqiao Liu
SSL
265
28
0
03 Mar 2023
A Simple Recipe for Competitive Low-compute Self-supervised Vision Models
Quentin Duval
Ishan Misra
Nicolas Ballas
306
11
0
23 Jan 2023
Unifying Synergies between Self-supervised Learning and Dynamic Computation
British Machine Vision Conference (BMVC), 2023
Tarun Krishna
Ayush K. Rai
Alexandru Drimbarean
Eric Arazo
Paul Albert
Alan F. Smeaton
Kevin McGuinness
Noel E. O'Connor
495
0
0
22 Jan 2023
Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning
Machine Vision and Applications (MVA), 2022
J. Denize
Jaonary Rabarisoa
Astrid Orcesi
Romain Hérault
SSL
302
6
0
21 Dec 2022
Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble
AAAI Conference on Artificial Intelligence (AAAI), 2022
Xiaoye Qu
Jun Zeng
Daizong Liu
Zhefeng Wang
Baoxing Huai
Pan Zhou
236
28
0
13 Dec 2022
ESTAS: Effective and Stable Trojan Attacks in Self-supervised Encoders with One Target Unlabelled Sample
Jiaqi Xue
Qiang Lou
AAML
230
11
0
20 Nov 2022
Pixel-Wise Contrastive Distillation
IEEE International Conference on Computer Vision (ICCV), 2022
Junqiang Huang
Zichao Guo
536
6
0
01 Nov 2022
Effective Self-supervised Pre-training on Low-compute Networks without Distillation
International Conference on Learning Representations (ICLR), 2022
Fuwen Tan
F. Saleh
Brais Martínez
303
4
0
06 Oct 2022
Attention Distillation: self-supervised vision transformer students need more guidance
British Machine Vision Conference (BMVC), 2022
Kai Wang
Fei Yang
Joost van de Weijer
ViT
184
22
0
03 Oct 2022
Slimmable Networks for Contrastive Self-supervised Learning
International Journal of Computer Vision (IJCV), 2022
Shuai Zhao
Xiaohan Wang
Linchao Zhu
Yi Yang
256
7
0
30 Sep 2022
A Pathologist-Informed Workflow for Classification of Prostate Glands in Histopathology
Alessandro Ferrero
Beatrice Knudsen
Deepika Sirohi
Ross T. Whitaker
292
0
0
27 Sep 2022
MimCo: Masked Image Modeling Pre-training with Contrastive Teacher
ACM Multimedia (ACM MM), 2022
Qiang-feng Zhou
Chaohui Yu
Haowen Luo
Zhibin Wang
Hao Li
VLM
397
29
0
07 Sep 2022
CMD: Self-supervised 3D Action Representation Learning with Cross-modal Mutual Distillation
European Conference on Computer Vision (ECCV), 2022
Yunyao Mao
Wen-gang Zhou
Zhenbo Lu
Jiajun Deng
Houqiang Li
450
73
0
26 Aug 2022
SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery
Neural Information Processing Systems (NeurIPS), 2022
Yezhen Cong
Samarth Khanna
Chenlin Meng
Patrick Liu
Erik Rozi
Yutong He
Marshall Burke
David B. Lobell
Stefano Ermon
ViT
575
481
0
17 Jul 2022
DSPNet: Towards Slimmable Pretrained Networks based on Discriminative Self-supervised Learning
Shaoru Wang
Zeming Li
Jin Gao
Liang Li
Weiming Hu
197
1
0
13 Jul 2022
Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing?
International Conference on Machine Learning (ICML), 2022
Keshigeyan Chandrasegaran
Ngoc-Trung Tran
Yunqing Zhao
Ngai-Man Cheung
322
52
0
29 Jun 2022
SimA: Simple Softmax-free Attention for Vision Transformers
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022
Soroush Abbasi Koohpayegani
Hamed Pirsiavash
352
43
0
17 Jun 2022
A Closer Look at Self-Supervised Lightweight Vision Transformers
International Conference on Machine Learning (ICML), 2022
Shaoru Wang
Jin Gao
Zeming Li
Jian Sun
Weiming Hu
ViT
322
51
0
28 May 2022
Generalized Knowledge Distillation via Relationship Matching
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Han-Jia Ye
Su Lu
De-Chuan Zhan
FedML
203
27
0
04 May 2022
Online Continual Learning for Embedded Devices
Tyler L. Hayes
Christopher Kanan
CLL
409
68
0
21 Mar 2022
Weak Augmentation Guided Relational Self-Supervised Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Mingkai Zheng
Shan You
Fei Wang
Chao Qian
Changshui Zhang
Xiaogang Wang
Chang Xu
375
6
0
16 Mar 2022
SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation
K. Navaneet
Soroush Abbasi Koohpayegani
Ajinkya Tejankar
Hamed Pirsiavash
186
22
0
13 Jan 2022
Constrained Mean Shift Using Distant Yet Related Neighbors for Representation Learning
K. Navaneet
Soroush Abbasi Koohpayegani
Ajinkya Tejankar
Kossar Pourahmadi
Akshayvarun Subramanya
Hamed Pirsiavash
SSL
313
8
0
08 Dec 2021
Boosting Contrastive Learning with Relation Knowledge Distillation
Kai Zheng
Yuanjiang Wang
Ye Yuan
SSL
148
14
0
08 Dec 2021
Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation
Amirhossein Dadashzadeh
Alan Whone
Majid Mirmehdi
SSL
419
5
0
07 Dec 2021
A Simple Baseline for Low-Budget Active Learning
Kossar Pourahmadi
Parsa Nooralinejad
Hamed Pirsiavash
391
21
0
22 Oct 2021
Constrained Mean Shift for Representation Learning
Ajinkya Tejankar
Soroush Abbasi Koohpayegani
Hamed Pirsiavash
SSL
211
0
0
19 Oct 2021
Unsupervised Representation Learning for Binary Networks by Joint Classifier Learning
Dahyun Kim
Jonghyun Choi
SSL
MQ
425
5
0
17 Oct 2021
Towards Communication-Efficient and Privacy-Preserving Federated Representation Learning
Haizhou Shi
Youcai Zhang
Zijin Shen
Siliang Tang
Yaqian Li
Yandong Guo
Yueting Zhuang
162
8
0
29 Sep 2021
Page 1 of 2