arXiv:2511.13649 (v3, latest)

Distribution Matching Distillation Meets Reinforcement Learning

17 November 2025
Dengyang Jiang
Dongyang Liu
Zanyi Wang
Qilong Wu
Liuzhuozheng Li
Hengzhuang Li
Xin Jin
David Liu
Z. Li
Bo Zhang
Mengmeng Wang
Steven Hoi
Peng Gao
H. Yang
arXiv (abs) · PDF · HTML · GitHub (97★)
Main: 9 pages · Bibliography: 3 pages · Appendix: 2 pages · 13 figures · 7 tables
Abstract

Distribution Matching Distillation (DMD) distills a pre-trained multi-step diffusion model into a few-step one to improve inference efficiency. However, the performance of the student is often capped by that of the teacher. To circumvent this dilemma, we propose DMDR, a novel framework that incorporates Reinforcement Learning (RL) techniques into the distillation process. We show that, for RL of the few-step generator, the DMD loss itself is a more effective regularizer than the traditional ones; in turn, RL helps guide the mode-coverage process in DMD more effectively. Together, these allow us to unlock the capacity of the few-step generator by conducting distillation and RL simultaneously. We also design dynamic distribution guidance and dynamic renoise sampling training strategies to improve the initial distillation process. Experiments demonstrate that DMDR achieves leading visual quality and prompt coherence among few-step methods, and even exhibits performance that exceeds the multi-step teacher.
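
To make the combined objective described in the abstract concrete, here is a minimal, hedged PyTorch-style sketch of one joint training step. It is not the paper's code: the score-difference surrogate for the DMD term, the differentiable reward as a stand-in for the RL objective, and every name (generator, fake_score, real_score, reward_model, lambda_rl) are illustrative assumptions; the dynamic distribution guidance and dynamic renoise sampling strategies are omitted.

```python
# Hedged sketch, not the authors' implementation: a single DMDR-style training
# step that optimizes a DMD distillation term and an RL reward term jointly.
# Module names, the surrogate losses, and lambda_rl are illustrative assumptions.
import torch
import torch.nn.functional as F


def dmdr_training_step(generator, fake_score, real_score, reward_model,
                       prompt_emb, lambda_rl=0.1):
    """Return the combined loss for one batch of prompt embeddings."""
    batch = prompt_emb.shape[0]
    noise = torch.randn(batch, 4, 64, 64, device=prompt_emb.device)
    x = generator(noise, prompt_emb)  # few-step student sample (latents)

    # DMD term: renoise the sample and compare the score of the student's
    # output distribution (fake_score) against the frozen teacher (real_score).
    t = torch.rand(batch, device=x.device)
    sigma = t.view(-1, 1, 1, 1)
    x_t = x + sigma * torch.randn_like(x)
    with torch.no_grad():
        grad = fake_score(x_t, t, prompt_emb) - real_score(x_t, t, prompt_emb)
    # Surrogate whose gradient w.r.t. x equals the score difference above.
    loss_dmd = 0.5 * F.mse_loss(x, (x - grad).detach())

    # RL term: maximize a reward on the generated sample. The reward model is
    # assumed differentiable here; the DMD loss above serves as the regularizer
    # that keeps the generator close to the teacher's distribution.
    loss_rl = -reward_model(x, prompt_emb).mean()

    return loss_dmd + lambda_rl * loss_rl
```

The point the sketch captures is that the distillation term and the reward term are applied to the same generated sample in the same update, so the DMD loss directly regularizes the RL signal rather than being run as a separate pre-training stage.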
