Maximize Your Diffusion: A Study into Reward Maximization and Alignment for Diffusion-based Control

Diffusion-based planning, learning, and control methods form a promising branch of powerful and expressive decision-making solutions. Given the growing interest, such methods have undergone numerous refinements in recent years. Despite these advances, however, existing work offers little systematic investigation of general techniques for reward maximization within the decision-making process. In this work, we study extensions of fine-tuning approaches for control applications. Specifically, we explore design choices for four fine-tuning approaches: reward alignment through reinforcement learning, direct preference optimization, supervised fine-tuning, and cascading diffusion, and we unify these independent efforts into a single paradigm. We demonstrate the utility of these proposals in offline RL settings, showing empirical improvements across a rich array of control tasks.
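To make the reward-alignment idea concrete, here is a minimal sketch of reward-weighted regression, one common recipe for biasing a policy toward high-reward behavior. The scalar policy parameter `mu`, the Gaussian sampling model, and the quadratic `reward` function are all illustrative assumptions, not the method studied in the paper: each sampled action pulls `mu` toward itself with weight `exp(R / temperature)`, so high-reward actions dominate the update.

```python
import math
import random

random.seed(0)

def reward(a):
    # Hypothetical task reward, peaked at a = 1.0 (assumed for illustration).
    return -(a - 1.0) ** 2

mu = 0.0           # single policy parameter: actions ~ N(mu, 1)
lr = 0.05          # step size
temperature = 0.5  # lower temperature -> sharper preference for high reward

for _ in range(2000):
    a = mu + random.gauss(0.0, 1.0)        # sample an action from the policy
    w = math.exp(reward(a) / temperature)  # exponential reward weight
    mu += lr * w * (a - mu)                # weighted regression toward the sample
```

After training, `mu` settles near the reward-maximizing action (here 1.0); the same weighting principle underlies reward-aligned fine-tuning of far richer samplers, including diffusion policies.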
@article{huh2025_2502.12198,
  title   = {Maximize Your Diffusion: A Study into Reward Maximization and Alignment for Diffusion-based Control},
  author  = {Dom Huh and Prasant Mohapatra},
  journal = {arXiv preprint arXiv:2502.12198},
  year    = {2025}
}