VLM-based Prompts as the Optimal Assistant for Unpaired Histopathology Virtual Staining

In histopathology, tissue sections are typically stained with common H&E staining or special stains (MAS, PAS, PASM, etc.) to clearly visualize specific tissue structures. The rapid advancement of deep learning offers an effective solution for generating virtually stained images, significantly reducing the time and labor costs associated with traditional histochemical staining. However, a new challenge arises: separating the fundamental visual characteristics of tissue sections from the visual differences induced by staining agents. Additionally, virtual staining often overlooks essential pathological knowledge and the physical properties of staining, resulting in only style-level transfer. To address these issues, we introduce, for the first time in virtual staining tasks, a pathological vision-language large model (VLM) as an auxiliary tool. We integrate contrastive learnable prompts, foundational concept anchors for tissue sections, and staining-specific concept anchors to leverage the extensive knowledge of the pathological VLM. This approach is designed to describe, frame, and enhance the direction of virtual staining. Furthermore, we develop a data augmentation method based on the constraints of the VLM. This method exploits the VLM's powerful image interpretation capabilities to further integrate image style and structural information, proving beneficial for high-precision pathological diagnostics. Extensive evaluations on publicly available multi-domain unpaired staining datasets demonstrate that our method can generate highly realistic images and enhance the accuracy of downstream tasks, such as glomerular detection and segmentation. Our code is available at: this https URL
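To make the prompt-guidance idea concrete, below is a minimal sketch (not the authors' implementation) of contrastive learnable prompts built around fixed concept anchors in a CLIP-style embedding space. The VLM here is a stand-in stub with frozen image/text towers; in practice it would be a pretrained pathology VLM, and the loss would be added to the unpaired stain-transfer generator's objective. All module and function names are illustrative assumptions.

```python
# Sketch: learnable prompt contexts around fixed staining-concept anchors, with an
# InfoNCE-style loss that pulls generated images toward the target-stain prompt
# and away from prompts describing other stains. FrozenVLMStub is a placeholder
# for a frozen CLIP-style pathology VLM, not a real model.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 512

class FrozenVLMStub(nn.Module):
    """Placeholder for a frozen pathology VLM with image and text towers."""
    def __init__(self, vocab_size=1000, embed_dim=EMBED_DIM):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        self.image_tower = nn.Conv2d(3, embed_dim, kernel_size=32, stride=32)
        for p in self.parameters():          # VLM weights stay frozen;
            p.requires_grad_(False)          # gradients still flow to inputs.

    def encode_image(self, images):          # (B, 3, 224, 224) -> (B, D)
        feats = self.image_tower(images)     # (B, D, 7, 7)
        return F.normalize(feats.mean(dim=(2, 3)), dim=-1)

    def encode_text_embeddings(self, token_embeddings):  # (B, L, D) -> (B, D)
        return F.normalize(token_embeddings.mean(dim=1), dim=-1)

class LearnablePrompt(nn.Module):
    """Learnable context tokens prepended to a fixed concept-anchor phrase,
    e.g. 'PAS-stained kidney tissue with glomeruli'."""
    def __init__(self, vlm, anchor_token_ids, n_ctx=8):
        super().__init__()
        self.vlm = vlm
        self.ctx = nn.Parameter(torch.randn(n_ctx, EMBED_DIM) * 0.02)
        self.register_buffer("anchor_ids", anchor_token_ids)

    def forward(self):
        anchor_emb = self.vlm.token_embed(self.anchor_ids)            # (L, D)
        tokens = torch.cat([self.ctx, anchor_emb], dim=0)             # (n_ctx+L, D)
        return self.vlm.encode_text_embeddings(tokens.unsqueeze(0))   # (1, D)

def prompt_contrastive_loss(vlm, target_prompt, negative_prompts, fake_images, tau=0.07):
    """Generated images should match the target-stain prompt more closely
    than prompts anchored on mismatched stains."""
    img = vlm.encode_image(fake_images)                               # (B, D)
    texts = torch.cat([target_prompt()] + [p() for p in negative_prompts], dim=0)
    logits = img @ texts.t() / tau                                    # (B, 1+K)
    labels = torch.zeros(img.size(0), dtype=torch.long, device=img.device)
    return F.cross_entropy(logits, labels)

# Usage: combine this term with the generator's adversarial/cycle losses.
vlm = FrozenVLMStub()
pas_prompt = LearnablePrompt(vlm, torch.randint(0, 1000, (6,)))
he_prompt = LearnablePrompt(vlm, torch.randint(0, 1000, (6,)))
fake = torch.rand(4, 3, 224, 224)                                     # generator output
loss = prompt_contrastive_loss(vlm, pas_prompt, [he_prompt], fake)
```

The key design choice this sketch illustrates is that only the context vectors (and, upstream, the generator) receive gradients, while the concept anchors and the VLM itself stay fixed, so the pathology knowledge encoded in the VLM constrains the staining direction rather than being overwritten by it.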
@article{chen2025_2504.15545,
  title   = {VLM-based Prompts as the Optimal Assistant for Unpaired Histopathology Virtual Staining},
  author  = {Zizhi Chen and Xinyu Zhang and Minghao Han and Yizhou Liu and Ziyun Qian and Weifeng Zhang and Xukun Zhang and Jingwei Wei and Lihua Zhang},
  journal = {arXiv preprint arXiv:2504.15545},
  year    = {2025}
}