d²Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching

27 September 2025
Yuchu Jiang
Yue Cai
Xiangzhong Luo
Jiale Fu
Jiarui Wang
Chonghan Liu
Xu Yang
Links: arXiv (abs) · PDF · HTML · GitHub (7★)
Main: 9 pages · Appendix: 5 pages · Bibliography: 3 pages · 9 figures · 5 tables
Abstract

Diffusion-based large language models (dLLMs), despite their promising performance, still suffer from inferior inference efficiency. This is because dLLMs rely on bidirectional attention and cannot directly benefit from the standard key-value (KV) cache as autoregressive models (ARMs) do. To tackle this issue, we introduce Dual aDaptive Cache (d²Cache), a training-free approximate KV cache framework for accelerating dLLM inference. d²Cache features a two-stage fine-grained selection strategy to identify tokens and adaptively update their KV states at each decoding step, while caching the KV states of the remaining tokens for reuse. Furthermore, d²Cache naturally offers a more reliable decoding alternative, which can enable quasi left-to-right generation and mitigate premature overconfidence in tokens at the end of the sequence. Extensive experimental results on two representative dLLMs (i.e., LLaDA and Dream) demonstrate that d²Cache not only achieves substantial inference speedups, but also yields consistent improvements in generation quality. The code is available at this https URL.
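
To make the core idea concrete, the sketch below illustrates the general pattern the abstract describes: at each decoding step, only an adaptively selected subset of token positions gets fresh key/value states, while the rest reuse cached ones. This is a minimal toy example, not the authors' implementation; the single linear projection, the drift-based selection heuristic, and the update_ratio parameter are all assumptions made purely for illustration.

# Toy sketch (assumed, not the paper's code) of adaptive per-token KV refresh.
import torch

torch.manual_seed(0)
seq_len, d_model = 16, 32
Wk = torch.randn(d_model, d_model) / d_model**0.5   # toy key projection
Wv = torch.randn(d_model, d_model) / d_model**0.5   # toy value projection

def kv_for(h):
    """Compute key/value states for the given hidden states."""
    return h @ Wk, h @ Wv

# Step 0: full KV computation, as a normal forward pass would do.
hidden = torch.randn(seq_len, d_model)
K, V = kv_for(hidden)
prev_hidden = hidden.clone()

update_ratio = 0.25                  # fraction of tokens refreshed per step (assumed)
for step in range(4):
    # Pretend the denoising step changed some hidden states (here: random noise).
    hidden = hidden + 0.1 * torch.randn_like(hidden)

    # Stage 1: score every position by how much its representation drifted.
    drift = (hidden - prev_hidden).norm(dim=-1)

    # Stage 2: refresh only the top-k positions; all others keep cached K/V,
    # which is where the savings over recomputing the full sequence come from.
    k = max(1, int(update_ratio * seq_len))
    idx = torch.topk(drift, k).indices
    K[idx], V[idx] = kv_for(hidden[idx])

    prev_hidden = hidden.clone()
    print(f"step {step}: refreshed positions {sorted(idx.tolist())}")

In a real dLLM the scoring would come from the model itself (e.g., per-token confidence) rather than raw hidden-state drift, but the caching pattern is the same: a small, adaptively chosen set of positions is recomputed per step while the remaining KV states are reused.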
