
arXiv:2507.18578 (v2, latest)

Wide-In, Narrow-Out: Revokable Decoding for Efficient and Effective DLLMs

24 July 2025
Feng Hong, Geng Yu, Yushi Ye, Haicheng Huang, Huangjie Zheng, Ya Zhang, Yanfeng Wang, Jiangchao Yao
arXiv (abs) · PDF · HTML · GitHub (27★)
Main: 9 pages · 6 figures · 4 tables · Bibliography: 4 pages · Appendix: 3 pages
Abstract

Diffusion Large Language Models (DLLMs) have emerged as a compelling alternative to autoregressive models, designed for fast parallel generation. However, existing DLLMs suffer from a severe quality-speed trade-off: faster parallel decoding leads to significant performance degradation. We attribute this to the irreversibility of standard decoding in DLLMs, which can be locked into a wrong decoding direction as early errors accumulate in the context. To resolve this, we introduce Wide-In, Narrow-Out (WINO), a training-free decoding algorithm that enables revokable decoding in DLLMs. WINO employs a parallel draft-and-verify mechanism: it aggressively drafts multiple tokens while simultaneously using the model's bidirectional context to verify suspicious ones and re-mask them for refinement. Verified on open-source DLLMs such as LLaDA and MMaDA, WINO decisively improves the quality-speed trade-off. For instance, on the GSM8K math benchmark it accelerates inference by 6× while improving accuracy by 2.58%; on Flickr30K captioning it achieves a 10× speedup with higher performance. Further comprehensive experiments demonstrate WINO's superiority and provide an in-depth understanding of its behavior.
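The draft-and-verify loop described in the abstract can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the `draft_conf` and `verify_conf` functions below are hypothetical stand-ins for a real DLLM's confidence estimates, chosen only so the wide-in/narrow-out control flow (draft many positions, verify in bidirectional context, revoke suspicious ones back to mask) is visible.

```python
# Toy sketch of WINO-style revokable decoding for a diffusion LLM.
# All scoring functions here are illustrative placeholders, not the
# actual model: a real DLLM would supply token predictions and
# confidences from a bidirectional forward pass.

MASK = "<mask>"

def draft_conf(i: int) -> float:
    """Hypothetical drafting confidence for masked position i."""
    return 0.9 if i % 3 == 0 else 0.2

def verify_conf(seq: list, i: int) -> float:
    """Hypothetical verification score: higher when the bidirectional
    neighbourhood of position i is already decoded (sequence boundaries
    count as decoded context)."""
    left = i == 0 or seq[i - 1] != MASK
    right = i == len(seq) - 1 or seq[i + 1] != MASK
    return 0.4 + 0.3 * left + 0.3 * right

def wino_decode(length: int, draft_k: int = 3, tau: float = 0.5,
                max_steps: int = 50) -> list:
    """Wide-In: aggressively draft up to draft_k masked positions per
    step. Narrow-Out: verify the drafts in the updated context and
    re-mask (revoke) any whose verification score falls below tau."""
    seq = [MASK] * length
    for _ in range(max_steps):
        masked = [i for i, t in enumerate(seq) if t == MASK]
        if not masked:
            break
        # Wide-In: fill the draft_k most confident masked positions.
        drafted = sorted(masked, key=lambda i: -draft_conf(i))[:draft_k]
        for i in drafted:
            seq[i] = f"tok{i}"
        # Narrow-Out: score all drafts simultaneously against the
        # updated bidirectional context, then revoke suspicious ones.
        scores = {i: verify_conf(seq, i) for i in drafted}
        for i in drafted:
            if scores[i] < tau:
                seq[i] = MASK  # revoked for refinement in a later step
    return seq
```

With these toy scores, an isolated draft (both neighbours still masked) is revoked and re-decoded later once its context has filled in, which is the revocation behaviour the abstract attributes to WINO; the speed/quality numbers reported above of course come from the real model, not from a sketch like this.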
