Exploring Token-Level Augmentation in Vision Transformer for Semi-Supervised Semantic Segmentation

Abstract

Semi-supervised semantic segmentation has witnessed remarkable advancements in recent years. However, existing algorithms are built on convolutional neural networks, and directly applying them to Vision Transformers is limited by conceptual disparities between the two architectures. To this end, we propose TokenMix, a data augmentation technique specifically designed for semi-supervised semantic segmentation with Vision Transformers. TokenMix aligns well with the global attention mechanism by mixing images at the token level, enhancing the model's ability to learn contextual information among image patches. We further incorporate image augmentation and feature augmentation to increase augmentation diversity. Moreover, to strengthen consistency regularization, we propose a dual-branch framework in which each branch applies both image and feature augmentation to the input image. We conduct extensive experiments across multiple benchmark datasets, including Pascal VOC 2012, Cityscapes, and COCO. Results suggest that the proposed method outperforms state-of-the-art algorithms with notable accuracy improvements, especially when fine annotations are limited.
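The abstract does not give implementation details, but the core idea of mixing two images at the token level can be sketched roughly as follows. This is a minimal illustration, not the paper's exact scheme: the function name `token_mix`, the uniform-random binary mask, and the NumPy representation of patch tokens are all assumptions for demonstration.

```python
import numpy as np

def token_mix(tokens_a, tokens_b, mix_ratio=0.5, seed=None):
    """Mix two patch-token sequences with a random binary token mask.

    tokens_a, tokens_b: arrays of shape (num_tokens, dim), the patch
    embeddings of two images. Returns the mixed tokens and the mask;
    the same mask would be used to mix the corresponding (pseudo-)labels.
    Illustrative sketch only -- not the authors' exact TokenMix scheme.
    """
    rng = np.random.default_rng(seed)
    num_tokens = tokens_a.shape[0]
    # Choose which token positions are taken from image B.
    num_mixed = int(round(mix_ratio * num_tokens))
    mask = np.zeros(num_tokens, dtype=bool)
    mask[rng.choice(num_tokens, size=num_mixed, replace=False)] = True
    # Token-level mix: masked positions come from B, the rest from A.
    mixed = np.where(mask[:, None], tokens_b, tokens_a)
    return mixed, mask
```

Because the mixing happens per token rather than over a contiguous rectangular region (as in CutMix), every attention layer sees tokens from both images, which is what makes this style of augmentation a natural fit for the global attention of Vision Transformers.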

@article{zhang2025_2503.02459,
  title={Exploring Token-Level Augmentation in Vision Transformer for Semi-Supervised Semantic Segmentation},
  author={Dengke Zhang and Quan Tang and Fagui Liu and Haiqing Mei and C. L. Philip Chen},
  journal={arXiv preprint arXiv:2503.02459},
  year={2025}
}