
Continual Multiple Instance Learning with Enhanced Localization for Histopathological Whole Slide Image Analysis

Byung Hyun Lee
Wongi Jeong
Woojae Han
Kyoungbun Lee
Se Young Chun
Main: 8 pages · Appendix: 10 pages · Bibliography: 3 pages · 11 figures · 21 tables
Abstract

Multiple instance learning (MIL) has significantly reduced annotation costs via bag-level weak labels for large-scale images, such as histopathological whole slide images (WSIs). However, its adaptability to continual tasks with minimal forgetting has been rarely explored, especially for the instance classification used in localization. Weakly incremental learning for semantic segmentation has been studied for continual localization, but it focused on natural images, leveraging global relationships among hundreds of small patches (e.g., $16 \times 16$) using pre-trained models. This approach seems infeasible for MIL localization due to the enormous number ($\sim 10^5$) of large patches (e.g., $256 \times 256$) and the lack of available global relationships, e.g., among cancer cells. To address these challenges, we propose Continual Multiple Instance Learning with Enhanced Localization (CoMEL), an MIL framework for both localization and adaptability with minimal forgetting. CoMEL consists of (1) Grouped Double Attention Transformer (GDAT) for efficient instance encoding, (2) Bag Prototypes-based Pseudo-Labeling (BPPL) for reliable instance pseudo-labeling, and (3) Orthogonal Weighted Low-Rank Adaptation (OWLoRA) to mitigate forgetting in both bag and instance classification. Extensive experiments on three public WSI datasets demonstrate the superior performance of CoMEL, outperforming the prior art by up to $11.00\%$ in bag-level accuracy and up to $23.4\%$ in localization accuracy under the continual MIL setup.
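
The abstract does not include code, but as a rough illustration of the general idea behind orthogonality-constrained low-rank adaptation (the family of methods OWLoRA belongs to), the following is a minimal PyTorch sketch: frozen base weights, a trainable low-rank update per task, and a penalty that discourages overlap between the current task's subspace and those of earlier tasks. The class and function names, the rank, and the penalty form are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: y = W x + B A x."""
    def __init__(self, base: nn.Linear, rank: int = 8):  # rank chosen for illustration
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay fixed across tasks
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> (batch, out_features)
        return self.base(x) + x @ self.A.t() @ self.B.t()

def orthogonality_penalty(new_A: torch.Tensor, old_As: list[torch.Tensor]) -> torch.Tensor:
    """Penalize overlap between the current task's low-rank factors and those
    frozen from previous tasks, pushing tasks into (near-)orthogonal subspaces
    so new updates interfere less with old ones (hypothetical penalty form)."""
    penalty = new_A.new_zeros(())
    for old_A in old_As:
        penalty = penalty + (new_A @ old_A.t()).pow(2).sum()
    return penalty

In training, such a penalty would typically be added to the task loss, e.g. loss = task_loss + lam * orthogonality_penalty(layer.A, frozen_As), with the previous tasks' factors kept frozen.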

@article{lee2025_2507.02395,
  title={Continual Multiple Instance Learning with Enhanced Localization for Histopathological Whole Slide Image Analysis},
  author={Byung Hyun Lee and Wongi Jeong and Woojae Han and Kyoungbun Lee and Se Young Chun},
  journal={arXiv preprint arXiv:2507.02395},
  year={2025}
}