ResearchTrend.AI
Probing the State of the Art: A Critical Look at Visual Representation Evaluation
arXiv:1912.00215 (v2, latest)
30 November 2019
Cinjon Resnick, Zeping Zhan, Joan Bruna
Topics: AI4TS

Papers citing "Probing the State of the Art: A Critical Look at Visual Representation Evaluation"

11 / 11 papers shown
Probing Graph Representations
Mohammad Sadegh Akhondzadeh, Vijay Lingam, Aleksandar Bojchevski
07 Mar 2023
Evaluating Representations with Readout Model Switching
Yazhe Li, J. Bornschein, Marcus Hutter
19 Feb 2023
Evaluating Self-Supervised Learning for Molecular Graph Embeddings
Hanchen Wang, Jean Kaddour, Shengchao Liu, Jian Tang, Joan Lasenby, Qi Liu
16 Jun 2022
On the Origins of the Block Structure Phenomenon in Neural Network Representations
Thao Nguyen, M. Raghu, Simon Kornblith
15 Feb 2022
Socially Supervised Representation Learning: the Role of Subjectivity in Learning Efficient Representations
Julius Taylor, Eleni Nisioti, Clément Moulin-Frier
20 Sep 2021
Multiple Instance Captioning: Learning Representations from Histopathology Textbooks and Articles
Jevgenij Gamper, Nasir M. Rajpoot
08 Mar 2021
Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
Thao Nguyen, M. Raghu, Simon Kornblith
Topics: OOD
29 Oct 2020
Evaluating Representations by the Complexity of Learning Low-Loss Predictors
William F. Whitney, M. Song, David Brandfonbrener, Jaan Altosaar, Kyunghyun Cho
15 Sep 2020
Self-Supervised Learning for Large-Scale Unsupervised Image Clustering
Evgenii Zheltonozhskii, Chaim Baskin, A. Bronstein, A. Mendelson
Topics: SSL
24 Aug 2020
Transfer Learning or Self-supervised Learning? A Tale of Two Pretraining Paradigms
Xingyi Yang, Xuehai He, Yuxiao Liang, Yue Yang, Shanghang Zhang, P. Xie
Topics: SSL
19 Jun 2020
How Useful is Self-Supervised Pretraining for Visual Tasks?
Alejandro Newell, Jia Deng
Topics: SSL
31 Mar 2020