Sparse Logit Sampling: Accelerating Knowledge Distillation in LLMs

Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Main: 9 pages · Appendix: 9 pages · Bibliography: 6 pages · 6 figures · 22 tables
Abstract

Knowledge distillation can be a cost-effective technique for distilling knowledge in Large Language Models, provided the teacher output logits can be pre-computed and cached. However, successfully applying this to pre-training remains largely unexplored. In this work, we prove that naive approaches to sparse knowledge distillation, such as caching Top-K probabilities, while intuitive, provide biased estimates of the teacher's probability distribution to the student, resulting in suboptimal performance and calibration. We propose an importance-sampling-based method, `Random Sampling Knowledge Distillation', which provides unbiased estimates, preserves the gradient in expectation, and requires storing significantly sparser logits. Our method enables faster training of student models with marginal overhead (<10%) compared to cross-entropy-based training, while maintaining competitive performance relative to full distillation, across a range of model sizes from 300M to 3B.
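To make the contrast with Top-K caching concrete, below is a minimal sketch of how a sampled sparse-distillation loss could look: a few vocabulary indices are drawn per position in proportion to the teacher's probabilities and cached, and the student's loss is a Monte-Carlo estimate of the teacher-student cross-entropy, which is unbiased and preserves the gradient in expectation. This is an illustrative sketch under assumed tensor shapes and helper names, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def cache_sampled_teacher_tokens(teacher_logits, k, temperature=1.0):
    """Draw k vocabulary indices per position from the teacher distribution
    and return them with their teacher probabilities (the only values cached).

    teacher_logits: (batch, seq_len, vocab_size) tensor -- assumed shapes.
    """
    probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Sample with replacement, proportional to the teacher probabilities.
    flat = probs.flatten(0, 1)                      # (batch*seq, vocab)
    idx = torch.multinomial(flat, num_samples=k, replacement=True)
    idx = idx.view(*probs.shape[:-1], k)            # (batch, seq, k)
    p_teacher = probs.gather(-1, idx)               # cached for inspection only
    return idx, p_teacher


def sampled_kd_loss(student_logits, idx, temperature=1.0):
    """Monte-Carlo estimate of E_{y ~ p_teacher}[-log q_student(y)].

    Because the indices were sampled from the teacher distribution itself,
    averaging -log q over them is an unbiased estimate of the full
    teacher-student cross-entropy, and its gradient matches the full-vocabulary
    gradient in expectation.
    """
    log_q = F.log_softmax(student_logits / temperature, dim=-1)
    log_q_sampled = log_q.gather(-1, idx)           # (batch, seq, k)
    return -log_q_sampled.mean()
```

A Top-K cache, by contrast, keeps only the k most probable tokens and renormalizes, which systematically drops tail mass and biases the target distribution the student sees.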
