Following the Clues: Experiments on Person Re-ID using Cross-Modal Intelligence

The collection and release of street-level recordings as Open Data play a vital role in advancing autonomous driving systems and AI research. However, these datasets pose significant privacy risks, particularly for pedestrians, due to the presence of Personally Identifiable Information (PII) that extends beyond biometric traits such as faces. In this paper, we present cRID, a novel cross-modal framework combining Large Vision-Language Models, Graph Attention Networks, and representation learning to detect textually describable clues of PII and enhance person re-identification (Re-ID). Our approach focuses on identifying and leveraging interpretable features, enabling the detection of semantically meaningful PII beyond low-level appearance cues. We conduct a systematic evaluation of PII presence in person image datasets. Our experiments show improved performance in practical cross-dataset Re-ID scenarios, notably from Market-1501 to CUHK03-np (detected), highlighting the framework's practical utility. Code is available at this https URL.
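The cross-dataset Re-ID scenario mentioned above (e.g., training on Market-1501 and testing on CUHK03-np) is typically scored by matching each query image's feature vector against a gallery and checking whether the top match shares the query identity. As a minimal, hypothetical sketch of that evaluation protocol (not the paper's released code; function names and the cosine-similarity choice are assumptions for illustration):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Fraction of queries whose nearest gallery feature has the same identity.

    query_feats / gallery_feats: lists of feature vectors (lists of floats)
    query_ids / gallery_ids: identity labels aligned with the feature lists
    """
    hits = 0
    for qf, qid in zip(query_feats, query_ids):
        # Index of the gallery entry most similar to this query.
        best = max(range(len(gallery_feats)),
                   key=lambda i: cosine_sim(qf, gallery_feats[i]))
        hits += int(gallery_ids[best] == qid)
    return hits / len(query_feats)

# Toy example: one query, two gallery entries.
acc = rank1_accuracy(
    query_feats=[[1.0, 0.0]], query_ids=[0],
    gallery_feats=[[0.9, 0.1], [0.0, 1.0]], gallery_ids=[0, 1],
)
print(acc)  # → 1.0
```

In practice Re-ID benchmarks also report mean Average Precision (mAP) and exclude same-camera gallery matches; this sketch shows only the core rank-1 matching step.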
@article{aufschläger2025_2507.01504,
  title={Following the Clues: Experiments on Person Re-ID using Cross-Modal Intelligence},
  author={Robert Aufschläger and Youssef Shoeb and Azarm Nowzad and Michael Heigl and Fabian Bally and Martin Schramm},
  journal={arXiv preprint arXiv:2507.01504},
  year={2025}
}