
IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation

Xinchen Zhang
Ling Yang
Guohao Li
Yaqi Cai
Jiake Xie
Yong Tang
Yujiu Yang
Mengdi Wang
Bin Cui
Abstract

Advanced diffusion models like RPG, Stable Diffusion 3, and FLUX have made notable strides in compositional text-to-image generation. However, these methods typically exhibit distinct strengths for compositional generation, with some excelling at attribute binding and others at spatial relationships. This disparity highlights the need for an approach that leverages the complementary strengths of various models to comprehensively improve compositional capability. To this end, we introduce IterComp, a novel framework that aggregates composition-aware model preferences from multiple models and employs an iterative feedback learning approach to enhance compositional generation. Specifically, we curate a gallery of six powerful open-source diffusion models and evaluate them on three key compositional metrics: attribute binding, spatial relationships, and non-spatial relationships. Based on these metrics, we develop a composition-aware model preference dataset comprising numerous image-rank pairs to train composition-aware reward models. We then propose an iterative feedback learning method that enhances compositionality in a closed-loop manner, enabling the progressive self-refinement of both the base diffusion model and the reward models over multiple iterations. We provide a theoretical proof of effectiveness, and extensive experiments show significant superiority over previous SOTA methods (e.g., Omost and FLUX), particularly in multi-category object composition and complex semantic alignment. IterComp opens new research avenues in reward feedback learning for diffusion models and compositional generation. Code: this https URL
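To make the closed-loop procedure described above concrete, the sketch below outlines one possible structure for the iterative composition-aware feedback loop: collect model-gallery rankings per compositional aspect, refine one reward model per aspect, fine-tune the base model against the aggregated rewards, and feed the improved model back into the gallery. This is a minimal illustrative sketch, not the authors' implementation; all function and parameter names (e.g., rank_images, train_reward, finetune_with_rewards) are hypothetical placeholders.

```python
# Hypothetical sketch of the IterComp-style closed loop described in the abstract.
# The callables passed in (rank_images, train_reward, finetune_with_rewards) are
# assumed to be supplied by the user; they are not part of any real IterComp API.

COMPOSITION_ASPECTS = ["attribute_binding", "spatial", "non_spatial"]

def build_preference_dataset(model_gallery, prompts, rank_images):
    """Generate images with every model in the gallery and rank them per aspect,
    yielding (prompt, better_image, worse_image, aspect) preference pairs."""
    dataset = []
    for prompt in prompts:
        images = [model.generate(prompt) for model in model_gallery]
        for aspect in COMPOSITION_ASPECTS:
            ranked = rank_images(images, prompt, aspect)  # ordered best -> worst
            for better, worse in zip(ranked, ranked[1:]):
                dataset.append((prompt, better, worse, aspect))
    return dataset

def itercomp_loop(base_model, model_gallery, prompts, reward_models,
                  train_reward, finetune_with_rewards, rank_images, iterations=3):
    """Closed-loop refinement: reward models and the base diffusion model are
    updated alternately over several iterations."""
    for _ in range(iterations):
        # 1. Collect composition-aware preferences from the current gallery.
        prefs = build_preference_dataset(model_gallery, prompts, rank_images)
        # 2. Refine one reward model per compositional aspect.
        for aspect in COMPOSITION_ASPECTS:
            aspect_prefs = [p for p in prefs if p[3] == aspect]
            reward_models[aspect] = train_reward(reward_models[aspect], aspect_prefs)
        # 3. Fine-tune the base diffusion model against the aggregated rewards.
        base_model = finetune_with_rewards(base_model, reward_models, prompts)
        # 4. The improved base model re-enters the gallery for the next round.
        model_gallery = model_gallery + [base_model]
    return base_model, reward_models
```

The key design choice this sketch tries to capture is that the preference data, the reward models, and the base model all improve together: each round's fine-tuned model enlarges the gallery, so later rounds rank and learn from progressively stronger samples.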

@article{zhang2025_2410.07171,
  title={IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation},
  author={Xinchen Zhang and Ling Yang and Guohao Li and Yaqi Cai and Jiake Xie and Yong Tang and Yujiu Yang and Mengdi Wang and Bin Cui},
  journal={arXiv preprint arXiv:2410.07171},
  year={2025}
}