SD²: Self-Distilled Sparse Drafters

Speculative decoding is a powerful technique for reducing the latency of Large Language Models (LLMs), offering a fault-tolerant framework that enables the use of highly compressed draft models. In this work, we introduce Self-Distilled Sparse Drafters (SD²), a novel methodology that leverages self-data distillation and fine-grained weight sparsity to produce highly efficient and well-aligned draft models. SD² systematically enhances draft token acceptance rates while significantly reducing Multiply-Accumulate operations (MACs), even in the Universal Assisted Generation (UAG) setting, where draft and target models originate from different model families. On a Llama-3.1-70B target model, SD² provides a 1.59× higher Mean Accepted Length (MAL) compared to layer-pruned draft models, and reduces MACs by over 43.87% with an 8.36% reduction in MAL compared to a dense draft model. Our results highlight the potential of sparsity-aware fine-tuning and compression strategies to improve LLM inference efficiency while maintaining alignment with target models.
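The speculative-decoding setup the abstract describes (a small draft model proposing tokens that a large target model verifies) can be exercised with Hugging Face transformers' assisted-generation API. The sketch below is illustrative only: the `assistant_model` argument to `generate` is an existing transformers feature, but the draft checkpoint named here is a placeholder, not the paper's SD² sparse drafter.

```python
# Minimal sketch of speculative (assisted) decoding with a draft model,
# assuming a recent version of Hugging Face transformers. The draft model
# shown is an illustrative placeholder, NOT the paper's SD^2 drafter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-3.1-70B"  # target model used in the paper
draft_id = "meta-llama/Llama-3.2-1B"    # placeholder draft model

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer(
    "Speculative decoding reduces latency by", return_tensors="pt"
).to(target.device)

# The draft model proposes a block of tokens; the target verifies them in a
# single forward pass, accepting a prefix and rejecting the rest. The average
# number of accepted tokens per round is the Mean Accepted Length (MAL)
# reported in the abstract. When draft and target come from different model
# families (the UAG setting), transformers additionally requires passing both
# tokenizers to generate().
output = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```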
@article{lasby2025_2504.08838,
  title   = {SD$^2$: Self-Distilled Sparse Drafters},
  author  = {Mike Lasby and Nish Sinnadurai and Valavan Manohararajah and Sean Lie and Vithursan Thangarasa},
  journal = {arXiv preprint arXiv:2504.08838},
  year    = {2025}
}