Language models are good pathologists: using attention-based sequence reduction and text-pretrained transformers for efficient WSI classification
Juan Pisula, Katarzyna Bozek
14 November 2022 · arXiv:2211.07384
Tags: VLM, MedIm

Papers citing "Language models are good pathologists: using attention-based sequence reduction and text-pretrained transformers for efficient WSI classification" (5 of 5 shown)

Local Attention Graph-based Transformer for Multi-target Genetic Alteration Prediction
Daniel Reisenbüchler, S. J. Wagner, Melanie Boxberg, T. Peng
MedIm · 13 May 2022

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
29 Apr 2021

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
VLM · 24 Feb 2021

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
VLM · 28 Jul 2020

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
MoE · 12 Mar 2020