Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology
arXiv: 2307.04596
10 July 2023
Guillaume Vray, Devavrat Tomar, Jean-Philippe Thiran, Behzad Bozorgtabar
MedIm
Papers citing "Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology" (5 papers)
Upcycling Models under Domain and Category Shift
Sanqing Qu, Tianpei Zou, Florian Roehrbein, Cewu Lu, Guang-Sheng Chen, Dacheng Tao, Changjun Jiang
13 Mar 2023
Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM
11 Nov 2021
Open-Set Recognition: a Good Closed-Set Classifier is All You Need?
S. Vaze, Kai Han, Andrea Vedaldi, Andrew Zisserman
BDL
12 Oct 2021
Intriguing Properties of Vision Transformers
Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, F. Khan, Ming-Hsuan Yang
ViT
21 May 2021
Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
29 Apr 2021