ResearchTrend.AI

On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals

30 July 2021
Haizhou Shi, Youcai Zhang, Siliang Tang, Wenjie Zhu, Yaqian Li, Yandong Guo, Yueting Zhuang

Papers citing "On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals"

13 / 13 papers shown
Lightweight Model Pre-training via Language Guided Knowledge Distillation
Mingsheng Li, Lin Zhang, Mingzhen Zhu, Zilong Huang, Gang Yu, Jiayuan Fan, Tao Chen
17 Jun 2024

Relational Self-supervised Distillation with Compact Descriptors for Image Copy Detection
Juntae Kim, Sungwon Woo, Jongho Nang
28 May 2024

On Improving the Algorithm-, Model-, and Data- Efficiency of Self-Supervised Learning
Yunhao Cao, Jianxin Wu
30 Apr 2024

CORN: Contact-based Object Representation for Nonprehensile Manipulation of General Unseen Objects
Yoonyoung Cho, Junhyek Han, Yoontae Cho, Beomjoon Kim
16 Mar 2024

A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm
Haizhou Shi, Hao Wang
18 Oct 2023

A Simple Recipe for Competitive Low-compute Self supervised Vision Models
Quentin Duval, Ishan Misra, Nicolas Ballas
23 Jan 2023

Establishing a stronger baseline for lightweight contrastive models
Wenye Lin, Yifeng Ding, Zhixiong Cao, Haitao Zheng
14 Dec 2022

Effective Self-supervised Pre-training on Low-compute Networks without Distillation
Fuwen Tan, F. Saleh, Brais Martínez
06 Oct 2022

Improving Label-Deficient Keyword Spotting Through Self-Supervised Pretraining
H. S. Bovbjerg, Z. Tan
04 Oct 2022

Slimmable Networks for Contrastive Self-supervised Learning
Shuai Zhao, Xiaohan Wang, Linchao Zhu, Yi Yang
30 Sep 2022

SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu
12 Jan 2021

Improved Baselines with Momentum Contrastive Learning
Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He
09 Mar 2020

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
17 Apr 2017