ResearchTrend.AI
Divide and Contrast: Self-supervised Learning from Uncurated Data
17 May 2021 · arXiv:2105.08054
Yonglong Tian, Olivier J. Hénaff, Aaron van den Oord
SSL
ArXiv · PDF · HTML

Papers citing "Divide and Contrast: Self-supervised Learning from Uncurated Data" (26 / 26 papers shown)

  1. Efficient Self-Supervised Learning for Earth Observation via Dynamic Dataset Curation
     Thomas Kerdreux, A. Tuel, Quentin Febvre, A. Mouche, Bertrand Chapron
     09 Apr 2025 · 73 · 0 · 0

  2. ELIP: Enhanced Visual-Language Foundation Models for Image Retrieval
     Guanqi Zhan, Yuanpei Liu, Kai Han, Weidi Xie, Andrew Zisserman
     VLM
     21 Feb 2025 · 102 · 0 · 0

  3. Self-Masking Networks for Unsupervised Adaptation
     Alfonso Taboada Warmerdam, Mathilde Caron, Yuki M. Asano
     11 Sep 2024 · 34 · 1 · 0

  4. Predicting the Best of N Visual Trackers
     B. Alawode, S. Javed, Arif Mahmood, Jiří Matas
     22 Jul 2024 · 39 · 1 · 0

  5. RudolfV: A Foundation Model by Pathologists for Pathologists
     Jonas Dippel, Barbara Feulner, Tobias Winterhoff, Timo Milbich, Stephan Tietz, ..., David Horst, Lukas Ruff, Klaus-Robert Müller, Frederick Klauschen, Maximilian Alber
     08 Jan 2024 · 23 · 28 · 0

  6. DINOv2: Learning Robust Visual Features without Supervision
     Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Q. Vo, Marc Szafraniec, ..., Hervé Jégou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski
     VLM, CLIP, SSL
     14 Apr 2023 · 44 · 3,011 · 0

  7. A surprisingly simple technique to control the pretraining bias for better transfer: Expand or Narrow your representation
     Florian Bordes, Samuel Lavoie, Randall Balestriero, Nicolas Ballas, Pascal Vincent
     SSL
     11 Apr 2023 · 19 · 5 · 0

  8. A simple, efficient and scalable contrastive masked autoencoder for learning visual representations
     Shlok Kumar Mishra, Joshua Robinson, Huiwen Chang, David Jacobs, Aaron Sarna, Aaron Maschinot, Dilip Krishnan
     DiffM
     30 Oct 2022 · 23 · 29 · 0

  9. Granularity-aware Adaptation for Image Retrieval over Multiple Tasks
     Jon Almazán, ByungSoo Ko, Geonmo Gu, Diane Larlus, Yannis Kalantidis
     ObjD, VLM
     05 Oct 2022 · 28 · 7 · 0

  10. Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods
      Skanda Koppula, Yazhe Li, Evan Shelhamer, Andrew Jaegle, Nikhil Parthasarathy, Relja Arandjelović, João Carreira, Olivier J. Hénaff
      30 Sep 2022 · 23 · 9 · 0

  11. On the Pros and Cons of Momentum Encoder in Self-Supervised Visual Representation Learning
      T. Pham, Chaoning Zhang, Axi Niu, Kang Zhang, Chang-Dong Yoo
      11 Aug 2022 · 18 · 11 · 0

  12. OpenCon: Open-world Contrastive Learning
      Yiyou Sun, Yixuan Li
      VLM, SSL, DRL
      04 Aug 2022 · 37 · 39 · 0

  13. Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt
      Sören Mindermann, J. Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, ..., Benedikt Höltgen, Aidan N. Gomez, Adrien Morisot, Sebastian Farquhar, Y. Gal
      14 Jun 2022 · 30 · 148 · 0

  14. CYBORGS: Contrastively Bootstrapping Object Representations by Grounding in Segmentation
      Renhao Wang, Hang Zhao, Yang Gao
      SSL
      17 Mar 2022 · 14 · 1 · 0

  15. Object discovery and representation networks
      Olivier J. Hénaff, Skanda Koppula, Evan Shelhamer, Daniel Zoran, Andrew Jaegle, Andrew Zisserman, João Carreira, Relja Arandjelović
      16 Mar 2022 · 33 · 87 · 0

  16. Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
      Hao He, Kaiwen Zha, Dina Katabi
      AAML
      22 Feb 2022 · 26 · 32 · 0

  17. Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?
      Nenad Tomašev, Ioana Bica, Brian McWilliams, Lars Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrović
      SSL
      13 Jan 2022 · 66 · 80 · 0

  18. SLIP: Self-supervision meets Language-Image Pre-training
      Norman Mu, Alexander Kirillov, David A. Wagner, Saining Xie
      VLM, CLIP
      23 Dec 2021 · 8 · 475 · 0

  19. Self-Supervised Models are Continual Learners
      Enrico Fini, Victor G. Turrisi da Costa, Xavier Alameda-Pineda, Elisa Ricci, Alahari Karteek, Julien Mairal
      BDL, CLL, SSL
      08 Dec 2021 · 19 · 157 · 0

  20. A data-centric approach for improving ambiguous labels with combined semi-supervised classification and clustering
      Lars Schmarje, M. Santarossa, Simon-Martin Schröder, Claudius Zelenka, R. Kiko, J. Stracke, N. Volkmann, Reinhard Koch
      30 Jun 2021 · 22 · 10 · 0

  21. Efficient Self-supervised Vision Transformers for Representation Learning
      Chunyuan Li, Jianwei Yang, Pengchuan Zhang, Mei Gao, Bin Xiao, Xiyang Dai, Lu Yuan, Jianfeng Gao
      ViT
      17 Jun 2021 · 28 · 208 · 0

  22. Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals
      Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Luc Van Gool
      SSL
      11 Feb 2021 · 188 · 250 · 0

  23. PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding
      Saining Xie, Jiatao Gu, Demi Guo, C. Qi, Leonidas J. Guibas, Or Litany
      3DPC
      21 Jul 2020 · 139 · 620 · 0

  24. Improved Baselines with Momentum Contrastive Learning
      Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He
      SSL
      09 Mar 2020 · 238 · 3,359 · 0

  25. A Mutual Information Maximization Perspective of Language Representation Learning
      Lingpeng Kong, Cyprien de Masson d'Autume, Wang Ling, Lei Yu, Zihang Dai, Dani Yogatama
      SSL
      18 Oct 2019 · 212 · 165 · 0

  26. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
      Antti Tarvainen, Harri Valpola
      OOD, MoMe
      06 Mar 2017 · 244 · 1,276 · 0