Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening

Abstract

We propose Diffusion-Sharpening, a fine-tuning approach that improves downstream alignment by optimizing sampling trajectories. Existing RL-based fine-tuning methods optimize single training timesteps and neglect trajectory-level alignment, while recent sampling-trajectory optimization methods incur substantial inference NFE costs. Diffusion-Sharpening overcomes both limitations by using a path-integral framework to select optimal trajectories during training, leveraging reward feedback and amortizing inference costs. Our method achieves superior training efficiency with faster convergence and the best inference efficiency, requiring no additional NFEs. Extensive experiments show that Diffusion-Sharpening outperforms both RL-based fine-tuning methods (e.g., Diffusion-DPO) and sampling-trajectory optimization methods (e.g., Inference Scaling) across diverse metrics, including text alignment, compositional capabilities, and human preferences, offering a scalable and efficient solution for future diffusion-model fine-tuning. Code: this https URL
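The core idea of selecting optimal trajectories via reward feedback during denoising can be illustrated with a toy sketch. This is not the authors' implementation: the one-step denoiser, the reward function, and all names (`toy_denoise_step`, `sharpened_trajectory`, `num_candidates`) are hypothetical, assuming a best-of-n candidate selection at each step as a stand-in for the paper's path-integral trajectory selection.

```python
import random

random.seed(0)  # deterministic for the example

def toy_denoise_step(x, t, noise_scale=0.1):
    """Hypothetical one-step denoiser: shrinks x toward 0 with a stochastic perturbation."""
    return x * (1 - 1.0 / t) + random.gauss(0, noise_scale)

def reward(x, target=0.0):
    """Hypothetical reward model: higher score when the sample is closer to the target."""
    return -abs(x - target)

def sharpened_trajectory(x0, num_steps=10, num_candidates=4):
    """At each denoising step, sample several candidate next states and keep the
    one the reward model scores highest -- trajectory-level selection rather than
    single-timestep optimization."""
    x = x0
    for t in range(num_steps, 0, -1):
        candidates = [toy_denoise_step(x, t + 1) for _ in range(num_candidates)]
        x = max(candidates, key=reward)  # greedy reward-guided choice per step
    return x

print(sharpened_trajectory(2.0))
```

In the paper's setting this selection happens during fine-tuning, so the reward-guided search cost is amortized into training and standard sampling (no extra NFEs) is used at inference.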

@article{tian2025_2502.12146,
  title={Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening},
  author={Ye Tian and Ling Yang and Xinchen Zhang and Yunhai Tong and Mengdi Wang and Bin Cui},
  journal={arXiv preprint arXiv:2502.12146},
  year={2025}
}