A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark
arXiv:1910.04867
1 October 2019
Xiaohua Zhai
J. Puigcerver
A. Kolesnikov
P. Ruyssen
C. Riquelme
Mario Lucic
Josip Djolonga
André Susano Pinto
Maxim Neumann
Alexey Dosovitskiy
Lucas Beyer
Olivier Bachem
Michael Tschannen
Marcin Michalski
Olivier Bousquet
Sylvain Gelly
N. Houlsby
    SSL

Papers citing "A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark"

Showing 50 of 105 citing papers.
Principled and Efficient Transfer Learning of Deep Models via Neural Collapse
  Xiao Li, Sheng Liu, Jin-li Zhou, Xin Lu, C. Fernandez‐Granda, Zhihui Zhu, Q. Qu
  AAML · 23 Dec 2022

3D Point Cloud Pre-training with Knowledge Distillation from 2D Images
  Yuan Yao, Yuanhan Zhang, Zhen-fei Yin, Jiebo Luo, Wanli Ouyang, Xiaoshui Huang
  3DPC · 17 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
  Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
  SSL · 06 Dec 2022

Finetune like you pretrain: Improved finetuning of zero-shot vision models
  Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, Aditi Raghunathan
  CLIP · VLM · 01 Dec 2022

Self-supervised remote sensing feature learning: Learning Paradigms, Challenges, and Future Works
  Chao Tao, Ji Qi, Mingning Guo, Qing Zhu, Haifeng Li
  SSL · 15 Nov 2022
SUPERB @ SLT 2022: Challenge on Generalization and Efficiency of Self-Supervised Speech Representation Learning
  Tzu-hsun Feng, Annie Dong, Ching-Feng Yeh, Shu-Wen Yang, Tzu-Quan Lin, ..., Xuankai Chang, Shinji Watanabe, Abdel-rahman Mohamed, Shang-Wen Li, Hung-yi Lee
  ELM · SSL · 16 Oct 2022

CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training
  Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson W. H. Lau, Wanli Ouyang, W. Zuo
  VLM · 3DPC · CLIP · 03 Oct 2022

Visual Prompt Tuning for Generative Transfer Learning
  Kihyuk Sohn, Yuan Hao, José Lezama, Luisa F. Polanía, Huiwen Chang, Han Zhang, Irfan Essa, Lu Jiang
  VPVLM · VLM · 03 Oct 2022

Making Intelligence: Ethical Values in IQ and ML Benchmarks
  Borhane Blili-Hamelin, Leif Hancox-Li
  01 Sep 2022

Assaying Out-Of-Distribution Generalization in Transfer Learning
  F. Wenzel, Andrea Dittadi, Peter V. Gehler, Carl-Johann Simon-Gabriel, Max Horn, ..., Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello
  OOD · OODD · AAML · 19 Jul 2022
FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification
  Aliaksandra Shysheya, J. Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, Richard E. Turner
  3DH · FedML · 17 Jun 2022

Neural Prompt Search
  Yuanhan Zhang, Kaiyang Zhou, Ziwei Liu
  VPVLM · VLM · 09 Jun 2022

Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models
  Yang Shu, Zhangjie Cao, Ziyang Zhang, Jianmin Wang, Mingsheng Long
  08 Jun 2022

How to Fine-tune Models with Few Samples: Update, Data Augmentation, and Test-time Augmentation
  Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun
  13 May 2022

ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models
  Chunyuan Li, Haotian Liu, Liunian Harold Li, Pengchuan Zhang, J. Aneja, ..., Ping Jin, Houdong Hu, Zicheng Liu, Yong Jae Lee, Jianfeng Gao
  19 Apr 2022

Empirical Evaluation and Theoretical Analysis for Representation Learning: A Survey
  Kento Nozawa, Issei Sato
  AI4TS · 18 Apr 2022

Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
  Polina Kirichenko, Pavel Izmailov, A. Wilson
  OOD · 06 Apr 2022
Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy
  Yuanhan Zhang, Qi Sun, Yichun Zhou, Zexin He, Zhen-fei Yin, Kunze Wang, Lu Sheng, Yu Qiao, Jing Shao, Ziwei Liu
  ObjD · VLM · 15 Mar 2022

PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks
  Nan Ding, Xi Chen, Tomer Levinboim, Soravit Changpinyo, Radu Soricut
  10 Mar 2022

Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision
  Priya Goyal, Quentin Duval, Isaac Seessel, Mathilde Caron, Ishan Misra, Levent Sagun, Armand Joulin, Piotr Bojanowski
  VLM · SSL · 16 Feb 2022

SLIP: Self-supervision meets Language-Image Pre-training
  Norman Mu, Alexander Kirillov, David A. Wagner, Saining Xie
  VLM · CLIP · 23 Dec 2021

Tradeoffs Between Contrastive and Supervised Learning: An Empirical Study
  A. Karthik, Mike Wu, Noah D. Goodman, Alex Tamkin
  SSL · 10 Dec 2021

PolyViT: Co-training Vision Transformers on Images, Videos and Audio
  Valerii Likhosherstov, Anurag Arnab, K. Choromanski, Mario Lucic, Yi Tay, Adrian Weller, Mostafa Dehghani
  ViT · 25 Nov 2021

Graphs as Tools to Improve Deep Learning Methods
  Carlos Lassance, Myriam Bontonou, Mounia Hamidouche, Bastien Pasdeloup, Lucas Drumetz, Vincent Gripon
  GNN · AI4CE · AAML · 08 Oct 2021
SERAB: A multi-lingual benchmark for speech emotion recognition
  Neil Scheidwasser, M. Kegler, P. Beckmann, Milos Cernak
  07 Oct 2021

Exploring the Limits of Large Scale Pre-training
  Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, Hanie Sedghi
  AI4CE · 05 Oct 2021

Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations
  Josh Beal, Hao Wu, Dong Huk Park, Andrew Zhai, Dmitry Kislyuk
  ViT · 12 Aug 2021

The Benchmark Lottery
  Mostafa Dehghani, Yi Tay, A. Gritsenko, Zhe Zhao, N. Houlsby, Fernando Diaz, Donald Metzler, Oriol Vinyals
  14 Jul 2021

Do sound event representations generalize to other audio tasks? A case study in audio transfer learning
  Anurag Kumar, Yun Wang, V. Ithapu, Christian Fuegen
  21 Jun 2021

How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
  Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, Lucas Beyer
  ViT · 18 Jun 2021

Learning to See by Looking at Noise
  Manel Baradad, Jonas Wulff, Tongzhou Wang, Phillip Isola, Antonio Torralba
  10 Jun 2021

Scaling Vision with Sparse Mixture of Experts
  C. Riquelme, J. Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, N. Houlsby
  MoE · 10 Jun 2021
Correlated Input-Dependent Label Noise in Large-Scale Image Classification
  Mark Collier, Basil Mustafa, Efi Kokiopoulou, Rodolphe Jenatton, Jesse Berent
  NoLa · 19 May 2021

MLP-Mixer: An all-MLP Architecture for Vision
  Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
  04 May 2021

Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark
  Vincent Dumoulin, N. Houlsby, Utku Evci, Xiaohua Zhai, Ross Goroshin, Sylvain Gelly, Hugo Larochelle
  06 Apr 2021

Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types
  Thomas Mensink, J. Uijlings, Alina Kuznetsova, Michael Gygli, V. Ferrari
  VLM · 24 Mar 2021

Self-Supervised Pretraining Improves Self-Supervised Pretraining
  Colorado Reed, Xiangyu Yue, Aniruddha Nrusimha, Sayna Ebrahimi, Vivek Vijaykumar, ..., Shanghang Zhang, Devin Guillory, Sean L. Metzger, Kurt Keutzer, Trevor Darrell
  23 Mar 2021

Learning Transferable Visual Models From Natural Language Supervision
  Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
  CLIP · VLM · 26 Feb 2021
LogME: Practical Assessment of Pre-trained Models for Transfer Learning
  Kaichao You, Yong Liu, Jianmin Wang, Mingsheng Long
  22 Feb 2021

Concept Generalization in Visual Representation Learning
  Mert Bulent Sariyildiz, Yannis Kalantidis, Diane Larlus, Alahari Karteek
  SSL · 10 Dec 2020

How Well Do Self-Supervised Models Transfer?
  Linus Ericsson, H. Gouk, Timothy M. Hospedales
  SSL · 26 Nov 2020

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
  Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
  ViT · 22 Oct 2020

Which Model to Transfer? Finding the Needle in the Growing Haystack
  Cédric Renggli, André Susano Pinto, Luka Rimanic, J. Puigcerver, C. Riquelme, Ce Zhang, Mario Lucic
  13 Oct 2020

Scalable Transfer Learning with Expert Models
  J. Puigcerver, C. Riquelme, Basil Mustafa, Cédric Renggli, André Susano Pinto, Sylvain Gelly, Daniel Keysers, N. Houlsby
  28 Sep 2020

On Robustness and Transferability of Convolutional Neural Networks
  Josip Djolonga, Jessica Yung, Michael Tschannen, Rob Romijnders, Lucas Beyer, ..., D. Moldovan, Sylvain Gelly, N. Houlsby, Xiaohua Zhai, Mario Lucic
  OOD · 16 Jul 2020
Rescaling Egocentric Vision
  Dima Damen, Hazel Doughty, G. Farinella, Antonino Furnari, Evangelos Kazakos, ..., Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray
  EgoV · 23 Jun 2020

Bootstrap your own latent: A new approach to self-supervised Learning
  Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Harvey Richemond, ..., M. G. Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko
  SSL · 13 Jun 2020

How Useful is Self-Supervised Pretraining for Visual Tasks?
  Alejandro Newell, Jia Deng
  SSL · 31 Mar 2020

A Survey of Deep Learning for Scientific Discovery
  M. Raghu, Erica Schmidt
  OOD · AI4CE · 26 Mar 2020

Automatic Shortcut Removal for Self-Supervised Representation Learning
  Matthias Minderer, Olivier Bachem, N. Houlsby, Michael Tschannen
  SSL · 20 Feb 2020