A 2D Semantic-Aware Position Encoding for Vision Transformers

Vision transformers have demonstrated significant advantages in computer vision tasks due to their ability to capture long-range dependencies and contextual relationships through self-attention. However, existing position encoding techniques, largely borrowed from natural language processing, fail to effectively capture semantic-aware positional relationships between image patches. Traditional approaches such as absolute and relative position encoding primarily model 1D linear positional relationships, often neglecting the semantic similarity between distant yet contextually related patches. These limitations hinder model generalization, translation equivariance, and the ability to handle repetitive or structured patterns in images. In this paper, we propose 2-Dimensional Semantic-Aware Position Encoding, a novel position encoding method that dynamically adapts position representations by leveraging local content instead of fixed linear positional relationships or spatial coordinates. Our method enhances the model's ability to generalize across varying image resolutions and scales, improves translation equivariance, and better aggregates features for visually similar but spatially distant patches. By integrating the proposed encoding into vision transformers, we bridge the gap between position encoding and perceptual similarity, thereby improving performance on computer vision tasks.
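
The abstract does not spell out the formulation, so the following is only a minimal PyTorch sketch of the general idea it describes: a positional bias for ViT self-attention that conditions on patch content (via a learned semantic projection) as well as on 2D relative offsets. All names and design choices here (SemanticAware2DPosBias, sem_dim, the small MLP over offsets and similarity) are hypothetical illustrations, not the authors' implementation.

```python
# Sketch of a content-conditioned 2D positional bias for ViT attention.
# Assumptions: bias is added to attention logits before softmax; the exact
# formulation in the paper may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticAware2DPosBias(nn.Module):
    """Mixes 2D relative-position structure with pairwise semantic
    similarity of patch tokens to produce a per-head attention bias."""

    def __init__(self, dim: int, sem_dim: int = 32, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        # Project patch features into a small "semantic" space.
        self.sem_proj = nn.Linear(dim, sem_dim)
        # Map (dy, dx, semantic similarity) to a per-head bias value.
        self.bias_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.GELU(), nn.Linear(64, num_heads)
        )

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, dim) patch tokens with N == h * w
        B, N, _ = x.shape
        # Normalized 2D relative offsets between every pair of patches;
        # normalization keeps the bias resolution-agnostic.
        ys, xs = torch.meshgrid(
            torch.arange(h, device=x.device),
            torch.arange(w, device=x.device),
            indexing="ij",
        )
        coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
        rel = (coords[:, None, :] - coords[None, :, :]) / max(h, w)         # (N, N, 2)
        rel = rel.unsqueeze(0).expand(B, -1, -1, -1)                        # (B, N, N, 2)
        # Pairwise cosine similarity of the semantic projections.
        s = F.normalize(self.sem_proj(x), dim=-1)                           # (B, N, sem_dim)
        sim = torch.einsum("bnd,bmd->bnm", s, s).unsqueeze(-1)              # (B, N, N, 1)
        feats = torch.cat([rel, sim], dim=-1)                               # (B, N, N, 3)
        bias = self.bias_mlp(feats)                                         # (B, N, N, heads)
        return bias.permute(0, 3, 1, 2)                                     # (B, heads, N, N)


# Usage: add the returned bias to attention logits before softmax.
if __name__ == "__main__":
    B, h, w, dim = 2, 14, 14, 384
    tokens = torch.randn(B, h * w, dim)
    pe = SemanticAware2DPosBias(dim)
    attn_bias = pe(tokens, h, w)  # (B, heads, N, N)
    print(attn_bias.shape)
```

Because the bias depends on patch content and normalized offsets rather than a fixed-length coordinate table, a module of this shape can be reused across input resolutions, which is the property the abstract emphasizes.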
@article{chen2025_2505.09466,
  title   = {A 2D Semantic-Aware Position Encoding for Vision Transformers},
  author  = {Xi Chen and Shiyang Zhou and Muqi Huang and Jiaxu Feng and Yun Xiong and Kun Zhou and Biao Yang and Yuhui Zhang and Huishuai Bao and Sijia Peng and Chuan Li and Feng Shi},
  journal = {arXiv preprint arXiv:2505.09466},
  year    = {2025}
}