Cloth-changing person re-identification aims to recognize the same person across non-overlapping cameras despite clothing changes. Advanced methods resort either to identity-related auxiliary modalities (e.g., sketches, silhouettes, and keypoints) or to clothing labels to mitigate the impact of clothes. However, relying on impractical and inflexible auxiliary modalities or annotations limits their real-world applicability. In this paper, we promote cloth-changing person re-identification by leveraging the abundant semantics present within pedestrian images, without the need for any auxiliaries. Specifically, we first propose a unified Semantics Mining and Refinement (SMR) module to extract robust identity-related content and salient semantics, effectively mitigating interference from clothing appearance. We further propose the Content and Salient Semantics Collaboration (CSSC) framework to coordinate and leverage these semantics, facilitating cross-parallel semantic interaction and refinement. Our proposed method achieves state-of-the-art performance on three cloth-changing benchmarks, demonstrating its superiority over advanced competitors. The code is available at this https URL.
@article{wang2025_2405.16597, title={Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification}, author={Qizao Wang and Xuelin Qian and Bin Li and Lifeng Chen and Yanwei Fu and Xiangyang Xue}, journal={arXiv preprint arXiv:2405.16597}, year={2025}}