A Broad Study of Pre-training for Domain Generalization and Adaptation
arXiv: 2203.11819
22 March 2022
Donghyun Kim, Kaihong Wang, Stan Sclaroff, Kate Saenko
Tags: OOD, AI4CE
Papers citing "A Broad Study of Pre-training for Domain Generalization and Adaptation" (15 of 15 papers shown)
Rethinking Pre-Trained Feature Extractor Selection in Multiple Instance Learning for Whole Slide Image Classification
Bryan Wong, Mun Yong Yi
Tags: VLM
02 Aug 2024
Shared and Private Information Learning in Multimodal Sentiment Analysis with Deep Modal Alignment and Self-supervised Multi-Task Learning
Songning Lai, Jiakang Li, Guinan Guo, Xifeng Hu, Yulong Li, ..., Yutong Liu, Zhaoxia Ren, Chun Wan, Danmin Miao, Zhi Liu
Tags: SSL
15 May 2023

The Role of Pre-training Data in Transfer Learning
R. Entezari, Mitchell Wortsman, O. Saukh, M. Shariatnia, Hanie Sedghi, Ludwig Schmidt
27 Feb 2023

Key Design Choices for Double-Transfer in Source-Free Unsupervised Domain Adaptation
Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale
10 Feb 2023

Rethinking the Role of Pre-Trained Networks in Source-Free Domain Adaptation
Wenyu Zhang, Li Shen, Chuan-Sheng Foo
Tags: TTA, AI4CE
15 Dec 2022

You Only Need a Good Embeddings Extractor to Fix Spurious Correlations
Raghav Mehta, Vítor Albiero, Li Chen, Ivan Evtimov, Tamar Glaser, Zhiheng Li, Tal Hassner
12 Dec 2022

Reconciling a Centroid-Hypothesis Conflict in Source-Free Domain Adaptation
I. Diamant, Roy H. Jennings, Oranit Dror, H. Habi, Arnon Netzer
07 Dec 2022

Cross-domain Transfer of defect features in technical domains based on partial target data
T. Schlagenhauf, Tim Scheurenbrand
24 Nov 2022

Okapi: Generalising Better by Making Statistical Matches Match
Myles Bartlett, Sara Romiti, V. Sharmanska, Novi Quadrianto
07 Nov 2022

Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains
X. Zhang, S. Gu, Yutaka Matsuo, Yusuke Iwasawa
Tags: VLM
25 Nov 2021

Are Transformers More Robust Than CNNs?
Yutong Bai, Jieru Mei, Alan Yuille, Cihang Xie
Tags: ViT, AAML
10 Nov 2021

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
29 Apr 2021

Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts
Soravit Changpinyo, P. Sharma, Nan Ding, Radu Soricut
Tags: VLM
17 Feb 2021

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
Tags: VLM, CLIP
11 Feb 2021

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
16 Nov 2016