ColorEdit: Training-free Image-Guided Color editing with diffusion model

15 November 2024
Xingxi Yin
Zhi Li
Jingfeng Zhang
Chenglin Li
Yin Zhang
Abstract

Text-to-image (T2I) diffusion models, with their impressive generative capabilities, have been adopted for image editing tasks and demonstrate remarkable efficacy. However, due to attention leakage and collision between the cross-attention map of the object and the new color attribute from the text prompt, text-guided image editing methods may fail to change the color of an object, resulting in misalignment between the edited image and the text prompt. In this paper, we conduct an in-depth analysis of the text-guided image synthesis process and of the semantic information learned by different cross-attention blocks. We observe that the visual representation of an object is determined in the up-blocks of the diffusion model during the early stage of the denoising process, and that color adjustment can be achieved by aligning the value matrices in the cross-attention layer. Based on these findings, we propose a straightforward yet stable and effective image-guided method to modify the color of an object without any additional fine-tuning or training. Finally, we present COLORBENCH, the first benchmark dataset for evaluating the performance of color-change methods. Extensive experiments validate the effectiveness of our method in object-level color editing and show that it surpasses popular text-guided image editing approaches on both synthesized and real images.
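To make the "value matrices alignment" idea concrete, the following is a minimal, hypothetical sketch (not the authors' code) of how a cross-attention layer could keep its attention map, driven by the source prompt's queries and keys, while swapping only the object token's value vector for one derived from a color reference. All tensor shapes, the token index, and the function name are illustrative assumptions.

import torch

def cross_attention_with_value_swap(
    q: torch.Tensor,          # (batch, n_img_tokens, d)  image-feature queries
    k: torch.Tensor,          # (batch, n_txt_tokens, d)  keys from the source prompt
    v_src: torch.Tensor,      # (batch, n_txt_tokens, d)  values from the source prompt
    v_ref: torch.Tensor,      # (batch, n_txt_tokens, d)  values from the color-reference prompt
    object_token_idx: int,    # index of the edited object's token (assumed known)
) -> torch.Tensor:
    """Cross-attention where only the object token's value row is aligned to the
    reference, so the spatial attention map (from Q/K) is preserved while the
    object's color attribute follows the reference."""
    d = q.shape[-1]

    # Standard scaled dot-product attention weights computed from the source
    # prompt, leaving the layout-determining attention map unchanged.
    attn = torch.softmax(q @ k.transpose(-1, -2) / d**0.5, dim=-1)

    # Align the value matrix: overwrite only the object token's value vector.
    v = v_src.clone()
    v[:, object_token_idx, :] = v_ref[:, object_token_idx, :]

    return attn @ v

# Toy usage with random tensors standing in for UNet up-block activations.
if __name__ == "__main__":
    b, n_img, n_txt, d = 1, 64, 77, 320
    q = torch.randn(b, n_img, d)
    k = torch.randn(b, n_txt, d)
    v_src = torch.randn(b, n_txt, d)
    v_ref = torch.randn(b, n_txt, d)
    out = cross_attention_with_value_swap(q, k, v_src, v_ref, object_token_idx=5)
    print(out.shape)  # torch.Size([1, 64, 320])

In a full pipeline this swap would be applied inside the UNet's cross-attention layers (the paper points specifically to the up-blocks during the early denoising steps); the sketch only isolates the per-layer operation.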

@article{yin2025_2411.10232,
  title={ColorEdit: Training-free Image-Guided Color editing with diffusion model},
  author={Xingxi Yin and Zhi Li and Jingfeng Zhang and Chenglin Li and Yin Zhang},
  journal={arXiv preprint arXiv:2411.10232},
  year={2025}
}