Adding Conditional Control to Diffusion Models with Reinforcement Learning

Diffusion models are powerful generative models that allow for precise control over the characteristics of the generated samples. While diffusion models trained on large datasets have achieved notable success, there is often a need to introduce additional controls during downstream fine-tuning, treating these powerful models as pre-trained diffusion models. This work presents a novel method based on reinforcement learning (RL) to add such controls using an offline dataset comprising inputs and labels. We formulate this task as an RL problem, with the classifier learned from the offline dataset and the KL divergence against pre-trained models serving as the reward functions. Our method, CTRL (Conditioning pre-Trained diffusion models with Reinforcement Learning), produces soft-optimal policies that maximize these reward functions. We formally demonstrate that our method enables sampling from the conditional distribution with additional controls during inference. Our RL-based approach offers several advantages over existing methods. Compared to classifier-free guidance, it improves sample efficiency and can greatly simplify dataset construction by leveraging conditional independence between the inputs and additional controls. Additionally, unlike classifier guidance, it eliminates the need to train classifiers from intermediate states to additional controls. The code is available at this https URL.
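To make the objective concrete, the following is a minimal, self-contained sketch of KL-regularized reward maximization for fine-tuning a diffusion model, in the spirit of the abstract: the reward is the log-likelihood of the desired label under a classifier learned from the offline dataset, penalized by the per-step KL divergence against the frozen pre-trained denoiser. All names here (ToyDenoiser, classifier, kl_weight, finetune_step) are hypothetical, and gradients are taken by backpropagating through a short reparameterized sampling chain for simplicity; this is an illustrative variant, not the authors' released implementation or their exact RL algorithm.

```python
# Sketch: KL-regularized reward maximization for diffusion fine-tuning (assumed setup).
import copy
import torch
import torch.nn as nn

T = 50                                     # number of reverse steps (toy setting)
betas = torch.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class ToyDenoiser(nn.Module):
    """Epsilon-prediction network for 2-D toy data (stand-in for a pre-trained model)."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                                 nn.Linear(128, 128), nn.SiLU(),
                                 nn.Linear(128, dim))
    def forward(self, x, t):
        t_emb = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([x, t_emb], dim=-1))

pretrained = ToyDenoiser()                  # frozen pre-trained diffusion model
finetuned = copy.deepcopy(pretrained)       # policy being fine-tuned
classifier = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 3))  # p(y | x_0) from offline data
for p in pretrained.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(finetuned.parameters(), lr=1e-4)
kl_weight = 0.1                             # strength of the KL penalty

def finetune_step(y, batch_size=64):
    """One update: sample with the fine-tuned model; reward = log p(y | x_0) - kl_weight * KL."""
    x = torch.randn(batch_size, 2)
    kl_sum = torch.zeros(batch_size)
    for t in reversed(range(T)):
        t_batch = torch.full((batch_size,), t)
        eps_ft = finetuned(x, t_batch)
        eps_pre = pretrained(x, t_batch)
        a, abar = alphas[t], alpha_bars[t]
        sigma = betas[t].sqrt()
        # Reverse-process means under the fine-tuned and pre-trained denoisers.
        mean_ft = (x - (1 - a) / (1 - abar).sqrt() * eps_ft) / a.sqrt()
        mean_pre = (x - (1 - a) / (1 - abar).sqrt() * eps_pre) / a.sqrt()
        # Gaussian KL with shared variance reduces to a scaled squared mean difference.
        kl_sum = kl_sum + ((mean_ft - mean_pre) ** 2).sum(-1) / (2 * sigma ** 2)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean_ft + sigma * noise         # reparameterized, so gradients flow through
    log_p_y = -nn.functional.cross_entropy(classifier(x), y, reduction="none")
    loss = -(log_p_y - kl_weight * kl_sum).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example: push generated samples toward class 2 under the offline-learned classifier.
print(finetune_step(torch.full((64,), 2)))
```

Under this kind of objective, the optimum is a soft-optimal policy whose samples trade off classifier likelihood against closeness to the pre-trained model, which is the mechanism the abstract describes for sampling from the conditional distribution at inference time.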
Yulai Zhao, Masatoshi Uehara, Gabriele Scalia, Sunyuan Kung, Tommaso Biancalani, Sergey Levine, Ehsan Hajiramezanali. Adding Conditional Control to Diffusion Models with Reinforcement Learning. arXiv preprint arXiv:2406.12120, 2025.