Self-Supervised Pyramid Representation Learning for Multi-Label Visual Analysis and Beyond
arXiv:2208.14439 · 30 August 2022
Cheng-Yen Hsieh, Chih-Jung Chang, Fu-En Yang, Yu-Chiang Frank Wang
Tags: SSL

Papers citing "Self-Supervised Pyramid Representation Learning for Multi-Label Visual Analysis and Beyond" (7 papers)

  • Compressive Visual Representations (27 Sep 2021) · Kuang-Huei Lee, Anurag Arnab, S. Guadarrama, John F. Canny, Ian S. Fischer · SSL
  • With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations (29 Apr 2021) · Debidatta Dwibedi, Y. Aytar, Jonathan Tompson, P. Sermanet, Andrew Zisserman · SSL
  • Emerging Properties in Self-Supervised Vision Transformers (29 Apr 2021) · Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
  • ImageNet-21K Pretraining for the Masses (22 Apr 2021) · T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor · SSeg, VLM, CLIP
  • Instance Localization for Self-supervised Detection Pretraining (16 Feb 2021) · Ceyuan Yang, Zhirong Wu, Bolei Zhou, Stephen Lin · ViT, SSL
  • Improved Baselines with Momentum Contrastive Learning (09 Mar 2020) · Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He · SSL
  • A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay (26 Mar 2018) · L. Smith