Preserving Product Fidelity in Large Scale Image Recontextualization with Diffusion Models
We present a framework for high-fidelity product image recontextualization using text-to-image diffusion models and a novel data augmentation pipeline. This pipeline leverages image-to-video diffusion, inpainting and outpainting, and negative examples to create synthetic training data, addressing the limitations of real-world data collection for this task. Our method improves the quality and diversity of generated images by disentangling product representations and enhancing the model's understanding of product characteristics. Evaluation on the ABO dataset and a private product dataset, using automated metrics and human assessment, demonstrates the effectiveness of our framework in generating realistic and compelling product visualizations, with implications for applications such as e-commerce and virtual product showcasing.
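To make the augmentation pipeline concrete, the sketch below shows what one stage of it could look like: repainting the background around a fixed product while keeping the product pixels untouched. It uses an off-the-shelf Stable Diffusion inpainting model through the diffusers library; the model ID, file names, mask convention, and prompt are illustrative assumptions, not the configuration used in the paper.

```python
# Hypothetical sketch of one augmentation stage: outpaint a new scene around a
# product using an off-the-shelf diffusers inpainting pipeline. Model choice,
# mask convention, and prompt are assumptions for illustration only.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# product_rgb: the original product photo.
# product_mask: white where the background may be repainted, black over the
# product so its appearance is preserved (product fidelity).
product_rgb = Image.open("product.png").convert("RGB").resize((512, 512))
product_mask = Image.open("background_mask.png").convert("L").resize((512, 512))

recontextualized = pipe(
    prompt="the product staged on a wooden table in a bright living room",
    image=product_rgb,
    mask_image=product_mask,
    num_inference_steps=30,
).images[0]
recontextualized.save("product_in_new_scene.png")
```

Varying the scene prompt and mask across many products yields synthetic (product, context) pairs of the kind the paper's pipeline uses for training, without requiring new real-world photography.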
@article{malhi2025_2503.08729,
  title={Preserving Product Fidelity in Large Scale Image Recontextualization with Diffusion Models},
  author={Ishaan Malhi and Praneet Dutta and Ellie Talius and Sally Ma and Brendan Driscoll and Krista Holden and Garima Pruthi and Arunachalam Narayanaswamy},
  journal={arXiv preprint arXiv:2503.08729},
  year={2025}
}