
RichControl: Structure- and Appearance-Rich Training-Free Spatial Control for Text-to-Image Generation

Liheng Zhang
Lexi Pang
Hang Ye
Xiaoxuan Ma
Yizhou Wang
Main: 9 pages · Appendix: 8 pages · Bibliography: 3 pages · 16 figures · 2 tables
Abstract

Text-to-image (T2I) diffusion models have shown remarkable success in generating high-quality images from text prompts. Recent efforts extend these models to incorporate conditional images (e.g., depth or pose maps) for fine-grained spatial control. Among them, feature injection methods have emerged as a training-free alternative to traditional fine-tuning approaches. However, they often suffer from structural misalignment, condition leakage, and visual artifacts, especially when the condition image diverges significantly from natural RGB distributions. By revisiting existing methods, we identify a core limitation: the synchronous injection of condition features fails to account for the trade-off between domain alignment and structural preservation during denoising. Inspired by this observation, we propose a flexible feature injection framework that decouples the injection timestep from the denoising process. At its core is a structure-rich injection module, which enables the model to better adapt to the evolving interplay between alignment and structure preservation throughout the diffusion steps, resulting in more faithful structural generation. In addition, we introduce appearance-rich prompting and a restart refinement strategy to further enhance appearance control and visual quality. Together, these designs enable training-free generation that is both structure-rich and appearance-rich. Extensive experiments show that our approach achieves state-of-the-art performance across diverse zero-shot conditioning scenarios.
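To make the core idea concrete, the sketch below illustrates what decoupling the injection timestep from the denoising timestep could look like in a generic diffusion sampling loop. This is a minimal illustration inferred from the abstract alone, not the authors' released implementation: the schedule `injection_timestep`, the stubs `extract_condition_features` and `denoise_step`, and the `shift` parameter are all hypothetical stand-ins.

# Minimal sketch of decoupled feature injection (hypothetical names).
# Synchronous injection corresponds to shift=0, i.e. t_inj == t.

def injection_timestep(t: int, num_steps: int, shift: float = 0.3) -> int:
    """Map the denoising timestep t to the timestep at which condition
    features are extracted; decoupling the two lets the sampler trade
    domain alignment against structural preservation as denoising proceeds."""
    return max(0, t - int(shift * num_steps))

def extract_condition_features(cond_image, t_inj):
    # Stub: a real pipeline would run the diffusion backbone on the
    # condition image at timestep t_inj and cache intermediate features.
    return {"timestep": t_inj, "source": cond_image}

def denoise_step(x, t, prompt, injected):
    # Stub: one reverse-diffusion step with the cached condition
    # features injected into the corresponding layers.
    return x

def sample(x, cond_image, prompt, num_steps: int = 50):
    for t in reversed(range(num_steps)):
        t_inj = injection_timestep(t, num_steps)  # decoupled from t
        feats = extract_condition_features(cond_image, t_inj)
        x = denoise_step(x, t, prompt, feats)
    return x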

@article{zhang2025_2507.02792,
  title={RichControl: Structure- and Appearance-Rich Training-Free Spatial Control for Text-to-Image Generation},
  author={Liheng Zhang and Lexi Pang and Hang Ye and Xiaoxuan Ma and Yizhou Wang},
  journal={arXiv preprint arXiv:2507.02792},
  year={2025}
}