
Thinking Inside the Mask: In-Place Prompting in Diffusion LLMs

14 August 2025
Xiangqi Jin
Y. Wang
Yifeng Gao
Zichen Wen
Biqing Qi
Dongrui Liu
Linfeng Zhang
LRM
arXiv:2508.10736 · abs · PDF · HTML
Main: 7 pages · Bibliography: 2 pages · Appendix: 1 page · 7 figures · 3 tables
Abstract

Although large language models (LLMs) have achieved remarkable success, their prefix-only prompting paradigm and sequential generation process offer limited flexibility for bidirectional information flow. Diffusion large language models (dLLMs) present new opportunities through their bidirectional attention mechanisms and iterative refinement processes, enabling more flexible in-place prompting strategies. We introduce ICE (In-Place Chain-of-Thought Prompting with Early Exit), a novel framework that transforms prefix-only prompting into in-place prompting specifically designed for dLLMs. ICE integrates in-place prompts directly within masked token positions during iterative refinement and employs a confidence-aware early exit mechanism to significantly reduce computational overhead. Extensive experiments demonstrate ICE's effectiveness, achieving up to a 17.29% accuracy improvement with a 4.12× speedup on GSM8K, and up to a 276.67× acceleration on MMLU while maintaining competitive performance.
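For intuition, here is a minimal sketch of what the abstract's two mechanisms could look like in a generic mask-based dLLM decoding loop. This is an illustration, not the authors' released implementation: `MASK_ID`, `CONF_THRESHOLD`, the top-k unmasking schedule, and the `model(x)` interface returning per-position logits are all assumptions made for the example.

```python
import torch

# Hypothetical constants; real values depend on the dLLM and its tokenizer.
MASK_ID = 126336          # assumed mask-token id
CONF_THRESHOLD = 0.9      # assumed early-exit confidence threshold

def ice_decode(model, prompt_ids, gen_len, inplace_prompt_ids, steps=64):
    """Sketch of ICE-style decoding: in-place prompting + early exit.

    Assumes model(x) returns logits of shape [1, len(x), vocab_size].
    """
    device = prompt_ids.device
    x = torch.cat([prompt_ids,
                   torch.full((1, gen_len), MASK_ID, device=device)], dim=1)

    # In-place prompting: plant chain-of-thought template tokens directly
    # inside the masked response region, instead of prepending them as a prefix.
    start = prompt_ids.shape[1]
    x[0, start:start + inplace_prompt_ids.shape[1]] = inplace_prompt_ids[0]

    for step in range(steps):
        logits = model(x)                        # bidirectional attention
        probs = torch.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)           # per-position confidence

        masked = (x == MASK_ID)
        # Unmask the most confident masked positions this iteration.
        k = max(1, int(masked.sum()) // (steps - step))
        scores = torch.where(masked, conf, torch.full_like(conf, -1.0))
        idx = scores[0].topk(k).indices
        x[0, idx] = pred[0, idx]

        masked = (x == MASK_ID)
        if not masked.any():
            break
        # Confidence-aware early exit: if every remaining masked position is
        # already predicted above threshold, commit them all and stop early.
        if conf[masked].min() >= CONF_THRESHOLD:
            x[masked] = pred[masked]
            break

    return x[:, start:]  # response region, including the in-place template
```

The early-exit check trades a small risk of committing lower-confidence tokens for a large reduction in refinement steps, which is consistent in spirit with the speedups the abstract reports.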
