While many diffusion models perform well when controlling a particular aspect such as style, character, or interaction, they struggle with fine-grained control due to dataset limitations and intricate model architecture design. This paper introduces a novel training-free algorithm for fine-grained generation, Aggregation of Multiple Diffusion Models (AMDM), which integrates features from multiple diffusion models into a specified model, activating specific features to enable fine-grained control. Experimental results demonstrate that AMDM significantly improves fine-grained control without any training, validating its effectiveness. The experiments also reveal that diffusion models initially focus on features such as position, attributes, and style, while later stages improve generation quality and consistency. AMDM offers a new perspective on tackling the challenge of fine-grained conditional generation in diffusion models: we can fully utilize existing conditional diffusion models that control specific aspects, or develop new ones, and then aggregate them with the AMDM algorithm. This eliminates the need to construct complex datasets, design intricate model architectures, or incur high training costs. Code is available at: this https URL.
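To make the aggregation idea concrete, the following is a minimal sketch, not the paper's exact AMDM procedure: it assumes a simple blending of noise predictions from several conditional diffusion models into the target model's sampling trajectory, applied only during the early steps (consistent with the observation above that early steps shape position, attributes, and style). The names `ToyEpsModel`, `aux_weight`, and `aggregate_until` are hypothetical and introduced only for illustration.

```python
# Illustrative sketch: blend noise predictions from auxiliary diffusion models
# into a target model's DDIM-style sampling loop. This is an assumption-based
# toy example, NOT the AMDM algorithm as specified in the paper.
import torch


class ToyEpsModel(torch.nn.Module):
    """Stand-in for a pretrained conditional diffusion model (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x_t, t):
        # Predicted noise eps_theta(x_t, t); the toy model ignores t.
        return self.net(x_t)


@torch.no_grad()
def aggregated_ddim_sample(target_model, aux_models, shape, num_steps=50,
                           aggregate_until=0.5, aux_weight=0.3):
    """DDIM-style sampling (eta = 0) where the auxiliary models' noise
    predictions are averaged and mixed into the target model's prediction
    for the early fraction of sampling steps."""
    betas = torch.linspace(1e-4, 2e-2, num_steps)
    alphas_cum = torch.cumprod(1.0 - betas, dim=0)

    x = torch.randn(shape)
    for i in reversed(range(num_steps)):
        t = torch.full((shape[0],), i, dtype=torch.long)
        eps = target_model(x, t)

        # Early steps only: aggregate features from the auxiliary models.
        if i > (1.0 - aggregate_until) * num_steps and aux_models:
            aux_eps = torch.stack([m(x, t) for m in aux_models]).mean(dim=0)
            eps = (1.0 - aux_weight) * eps + aux_weight * aux_eps

        # Deterministic DDIM update.
        a_t = alphas_cum[i]
        a_prev = alphas_cum[i - 1] if i > 0 else torch.tensor(1.0)
        x0_pred = (x - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_prev.sqrt() * x0_pred + (1.0 - a_prev).sqrt() * eps
    return x


if __name__ == "__main__":
    target = ToyEpsModel()
    aux = [ToyEpsModel(), ToyEpsModel()]
    sample = aggregated_ddim_sample(target, aux, shape=(1, 3, 32, 32))
    print(sample.shape)  # torch.Size([1, 3, 32, 32])
```

Restricting aggregation to the early steps mirrors the abstract's finding that coarse features are fixed early, while later steps mainly refine quality and consistency; the actual aggregation operator and schedule used by AMDM should be taken from the paper and released code.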