good4cir: Generating Detailed Synthetic Captions for Composed Image Retrieval

Composed image retrieval (CIR) enables users to search for images using a reference image combined with textual modifications. Recent advances in vision-language models have improved CIR, but dataset limitations remain a barrier. Existing datasets often rely on simplistic, ambiguous, or insufficient manual annotations, hindering fine-grained retrieval. We introduce good4cir, a structured pipeline that leverages vision-language models to generate high-quality synthetic annotations. Our method involves: (1) extracting fine-grained object descriptions from the query image, (2) generating comparable descriptions for the target image, and (3) synthesizing textual instructions that capture meaningful transformations between the two images. This approach reduces hallucination, enhances modification diversity, and ensures object-level consistency. Applying our method both improves existing datasets and enables the construction of new datasets across diverse domains. Results demonstrate improved retrieval accuracy for CIR models trained on our pipeline-generated datasets. We release our dataset construction framework to support further research in CIR and multi-modal retrieval.
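The three-stage pipeline described above can be sketched roughly as follows. This is an illustrative sketch only: the `query_vlm` helper, the model name, and the prompt wording are assumptions for demonstration, not the authors' actual prompts or implementation.

```python
# Minimal sketch of the three-stage annotation pipeline, assuming an
# OpenAI-compatible vision-language model. Prompts and model name are
# hypothetical placeholders, not the paper's actual configuration.
import base64
from openai import OpenAI

client = OpenAI()

def query_vlm(prompt: str, image_path: str) -> str:
    """Send one image plus an instruction to the VLM and return its text reply."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

def generate_cir_annotation(query_image: str, target_image: str) -> dict:
    # Stage 1: fine-grained object descriptions for the query image.
    query_objects = query_vlm(
        "List the salient objects in this image with fine-grained attributes "
        "(color, material, count, position), one per line.",
        query_image,
    )
    # Stage 2: comparable descriptions for the target image, conditioned on
    # the query-side list to keep objects aligned across the pair.
    target_objects = query_vlm(
        "Describe the corresponding objects in this image so they can be "
        f"compared against the following list:\n{query_objects}",
        target_image,
    )
    # Stage 3: synthesize a modification instruction capturing the
    # object-level differences between the two descriptions.
    instruction = query_vlm(
        f"Given these query-image objects:\n{query_objects}\n"
        f"and these target-image objects:\n{target_objects}\n"
        "Write a concise instruction describing how to modify the query image "
        "to obtain this (target) image.",
        target_image,
    )
    return {
        "query_objects": query_objects,
        "target_objects": target_objects,
        "instruction": instruction,
    }
```

Conditioning the target-image prompt on the query-side descriptions is what enforces object-level consistency between the two captions before the modification instruction is synthesized.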
@article{kolouju2025_2503.17871,
  title   = {good4cir: Generating Detailed Synthetic Captions for Composed Image Retrieval},
  author  = {Pranavi Kolouju and Eric Xing and Robert Pless and Nathan Jacobs and Abby Stylianou},
  journal = {arXiv preprint arXiv:2503.17871},
  year    = {2025}
}