ResearchTrend.AI

Long-MIL: Scaling Long Contextual Multiple Instance Learning for Histopathology Whole Slide Image Analysis (arXiv:2311.12885)

21 November 2023 · Honglin Li, Yunlong Zhang, Chenglu Zhu, Jiatong Cai, Sunyi Zheng, Lin Yang · VLM

Papers citing "Long-MIL: Scaling Long Contextual Multiple Instance Learning for Histopathology Whole Slide Image Analysis"

11 / 11 papers shown
AEM: Attention Entropy Maximization for Multiple Instance Learning based Whole Slide Image Classification
Yunlong Zhang, Zhongyi Shui, Yunxuan Sun, Honglin Li, Jingxiong Li, Chenglu Zhu, Lin Yang
18 Jun 2024 · 31 / 0 / 0
Task-specific Fine-tuning via Variational Information Bottleneck for Weakly-supervised Pathology Whole Slide Image Classification
Honglin Li, Chenglu Zhu, Yunlong Zhang, Yuxuan Sun, Zhongyi Shui, Wenwei Kuang, S. Zheng, L. Yang
15 Mar 2023 · 55 / 56 / 0
Resurrecting Recurrent Neural Networks for Long Sequences
Antonio Orvieto, Samuel L. Smith, Albert Gu, Anushan Fernando, Çağlar Gülçehre, Razvan Pascanu, Soham De
11 Mar 2023 · 83 / 258 / 0
Benchmarking the Robustness of Deep Neural Networks to Common Corruptions in Digital Pathology
Yunlong Zhang, Yuxuan Sun, Honglin Li, S. Zheng, Chenglu Zhu, L. Yang
30 Jun 2022 · OOD · 46 / 26 / 0
Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
11 Nov 2021 · ViT, TPM · 258 / 7,337 / 0
BRACS: A Dataset for BReAst Carcinoma Subtyping in H&E Histology Images
N. Brancati, A. Anniciello, Pushpak Pati, D. Riccio, G. Scognamiglio, ..., A. Foncubierta, G. Botti, M. Gabrani, Florinda Feroce, Maria Frucci
08 Nov 2021 · 36 / 119 / 0
SHAPE: Shifted Absolute Position Embedding for Transformers
Shun Kiyono, Sosuke Kobayashi, Jun Suzuki, Kentaro Inui
13 Sep 2021 · 221 / 44 / 0
Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021 · 234 / 690 / 0
Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
29 Apr 2021 · 283 / 5,723 / 0
A Simple and Effective Positional Encoding for Transformers
Pu-Chin Chen, Henry Tsai, Srinadh Bhojanapalli, Hyung Won Chung, Yin-Wen Chang, Chun-Sung Ferng
18 Apr 2021 · 30 / 61 / 0
Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
12 Mar 2020 · MoE · 228 / 502 / 0