Position-Aware Depth Decay Decoding ($D^3$): Boosting Large Language Model Inference Efficiency

11 March 2025
Siqi Fan
Xuezhi Fang
Xingrun Xing
Peng Han
Shuo Shang
Yequan Wang
ArXiv · PDF · HTML
Abstract

Due to their large number of parameters, the inference phase of Large Language Models (LLMs) is resource-intensive. Unlike traditional model compression, which requires retraining, recent dynamic computation methods show that not all components are needed for inference, enabling a training-free pipeline. In this paper, we focus on the dynamic depth of LLM generation. We propose a token-position-aware layer-skipping framework that efficiently saves about 1.5x the operations while maintaining performance. We first observe that tokens predicted later have lower perplexity and thus require less computation. We then propose a training-free algorithm called Position-Aware Depth Decay Decoding ($D^3$), which leverages a power-law decay function, $\left\lfloor L \times \alpha^i \right\rfloor$, to determine the number of layers to retain when generating token $T_i$. Remarkably, without any retraining, $D^3$ achieves success across a wide range of generation tasks for the first time. Experiments on large language models (i.e., the Llama series) with $7 \sim 70$ billion parameters show that $D^3$ can achieve an average 1.5x speedup compared with the full-inference pipeline while maintaining comparable performance, with nearly no performance drop ($<1\%$) on the GSM8K and BBH benchmarks.
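The decay schedule described in the abstract can be sketched directly. The snippet below is an illustrative Python helper, not the authors' released implementation: the layer count L = 32 and decay rate alpha = 0.99 are assumptions chosen for demonstration, while the formula itself, floor(L × α^i) for token position i, is taken from the abstract.

import math

def retained_layers(total_layers: int, position: int, alpha: float) -> int:
    # Number of transformer layers kept when generating token T_i: floor(L * alpha^i).
    return math.floor(total_layers * (alpha ** position))

if __name__ == "__main__":
    L = 32        # assumed depth of a Llama-7B-scale model (illustrative)
    alpha = 0.99  # assumed decay rate; the paper selects its own values
    for i in (0, 10, 50, 100, 200):
        print(f"token {i}: keep {retained_layers(L, i, alpha)} of {L} layers")

As the position i grows, the retained depth decays smoothly toward a shallower sub-network, which is what yields the reported average speedup over full-depth inference.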

View on arXiv
@article{fan2025_2503.08524,
  title={Position-Aware Depth Decay Decoding ($D^3$): Boosting Large Language Model Inference Efficiency},
  author={Siqi Fan and Xuezhi Fang and Xingrun Xing and Peng Han and Shuo Shang and Yequan Wang},
  journal={arXiv preprint arXiv:2503.08524},
  year={2025}
}