Instruct-ReID++: Towards Universal Purpose Instruction-Guided Person Re-identification

Abstract

Human intelligence can retrieve any person according to both visual and language descriptions. However, the current computer vision community studies specific person re-identification (ReID) tasks in different scenarios separately, which limits applications in the real world. This paper strives to resolve this problem by proposing a novel instruct-ReID task that requires the model to retrieve images according to the given image or language instructions. Instruct-ReID is the first exploration of a general ReID setting, where six existing ReID tasks can be viewed as special cases by assigning different instructions. To facilitate research on this new instruct-ReID task, we propose a large-scale OmniReID++ benchmark equipped with diverse data and comprehensive evaluation methods, e.g., task-specific and task-free evaluation settings. In the task-specific evaluation setting, gallery sets are categorized according to specific ReID tasks. We propose a novel baseline model, IRM, with an adaptive triplet loss to handle various retrieval tasks within a unified framework. For the task-free evaluation setting, where target person images are retrieved from task-agnostic gallery sets, we further propose a new method, IRM++, with novel memory bank-assisted learning. Extensive evaluations of IRM and IRM++ on the OmniReID++ benchmark demonstrate the superiority of our proposed methods, achieving state-of-the-art performance on 10 test sets. The datasets, the model, and the code will be available at this https URL
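
The abstract mentions an adaptive triplet loss for unified instruction-guided retrieval. As a rough illustration only, the sketch below shows how a generic instruction-conditioned embedding model might be trained with a plain (non-adaptive) triplet margin loss; the module names, feature dimensions, and margin value are assumptions for illustration and are not taken from the paper or the IRM/IRM++ implementations.

```python
# Illustrative sketch (not the authors' IRM code): fuse an image feature with an
# instruction feature, then train the fused embedding with a triplet margin loss.
# All names, dimensions, and the margin value are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InstructGuidedEncoder(nn.Module):
    """Hypothetical encoder: projects image and instruction features and fuses them."""

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.image_proj = nn.Linear(2048, feat_dim)        # stand-in for a visual backbone output
        self.instruction_proj = nn.Linear(768, feat_dim)   # stand-in for a text/visual instruction encoder
        self.fusion = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, image_feat: torch.Tensor, instruction_feat: torch.Tensor) -> torch.Tensor:
        img = self.image_proj(image_feat)
        ins = self.instruction_proj(instruction_feat)
        fused = self.fusion(torch.cat([img, ins], dim=-1))
        return F.normalize(fused, dim=-1)  # L2-normalized retrieval embedding


def triplet_loss(anchor, positive, negative, margin: float = 0.3) -> torch.Tensor:
    """Standard triplet margin loss on normalized embeddings (margin value is a guess)."""
    d_ap = (anchor - positive).pow(2).sum(dim=-1)
    d_an = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(d_ap - d_an + margin).mean()


if __name__ == "__main__":
    model = InstructGuidedEncoder()
    # Fake pre-extracted features for a batch of 4 triplets sharing one instruction.
    ins = torch.randn(4, 768)
    anchor = model(torch.randn(4, 2048), ins)
    positive = model(torch.randn(4, 2048), ins)
    negative = model(torch.randn(4, 2048), ins)
    print(triplet_loss(anchor, positive, negative).item())
```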

@article{he2025_2405.17790,
  title={Instruct-ReID++: Towards Universal Purpose Instruction-Guided Person Re-identification},
  author={Weizhen He and Yiheng Deng and Yunfeng Yan and Feng Zhu and Yizhou Wang and Lei Bai and Qingsong Xie and Donglian Qi and Wanli Ouyang and Shixiang Tang},
  journal={arXiv preprint arXiv:2405.17790},
  year={2025}
}