ShapeSpeak: Body Shape-Aware Textual Alignment for Visible-Infrared Person Re-Identification

Abstract

Visible-Infrared Person Re-identification (VIReID) aims to match pedestrian images across the visible and infrared modalities, but the large modality gap and the complexity of identity features make the task challenging. Existing methods rely solely on identity-label supervision, which makes it difficult to fully extract high-level semantic information. Recently, vision-language pre-trained models have been introduced to VIReID, enriching semantic modeling by generating textual descriptions. However, such methods do not explicitly model body shape, which is crucial for cross-modal matching. To address this, we propose a Body Shape-aware Textual Alignment (BSaTa) framework that explicitly models and exploits body shape information to improve VIReID. Specifically, we design a Body Shape Textual Alignment (BSTA) module that extracts body shape information with a human parsing model and converts it into structured text representations via CLIP. We also design a Text-Visual Consistency Regularizer (TVCR) to align the body shape textual representations with the visual body shape features. Furthermore, we introduce a Shape-aware Representation Learning (SRL) mechanism that combines Multi-text Supervision and Distribution Consistency Constraints to guide the visual encoder toward modality-invariant and discriminative identity features. Experiments on the SYSU-MM01 and RegDB datasets demonstrate that our method achieves superior performance, validating its effectiveness.
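
The abstract only names the modules; as a concrete illustration, the sketch below shows one plausible form of the body shape textual alignment idea: a structured shape description (of the kind that might be derived from a human parsing model's output) is encoded with CLIP's text encoder and pulled toward a visual shape feature by a cosine-based consistency term. This is not the authors' implementation; the description template, the stand-in visual feature, and the loss form are assumptions, and it requires PyTorch plus OpenAI's CLIP package.

# Minimal sketch of the BSTA/TVCR idea, NOT the paper's code.
# Assumes: pip install torch, plus OpenAI CLIP (github.com/openai/CLIP).
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Hypothetical structured description summarizing body shape attributes
# (in the paper these would come from a human parsing model's output).
shape_description = "a person with a slim build, broad shoulders, and long legs"
tokens = clip.tokenize([shape_description]).to(device)

with torch.no_grad():
    text_feat = model.encode_text(tokens).float()  # shape (1, 512)
text_feat = F.normalize(text_feat, dim=-1)

# Stand-in for the visual body shape feature produced by the Re-ID
# visual encoder; in practice this comes from the trained backbone.
visual_feat = F.normalize(torch.randn(1, 512, device=device), dim=-1)

# Text-visual consistency: one plausible regularizer that pulls the
# visual shape feature toward its textual counterpart (cosine distance).
tvcr_loss = 1.0 - (visual_feat * text_feat).sum(dim=-1).mean()
print(f"consistency loss: {tvcr_loss.item():.4f}")

A cosine distance is used here because CLIP embeddings are conventionally compared after L2 normalization; the paper's actual regularizer may take a different form.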

@article{yan2025_2504.18025,
  title={ShapeSpeak: Body Shape-Aware Textual Alignment for Visible-Infrared Person Re-Identification},
  author={Shuanglin Yan and Neng Dong and Shuang Li and Rui Yan and Hao Tang and Jing Qin},
  journal={arXiv preprint arXiv:2504.18025},
  year={2025}
}