Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models

Knowledge distillation (KD) is a technique for transferring knowledge from a complex teacher model to a simpler student model, substantially improving efficiency while largely preserving accuracy. It has driven notable advances across applications including image classification, object detection, language modeling, text classification, and sentiment analysis. Recent innovations in KD, such as attention-based approaches, block-wise logit distillation, and decoupled distillation, have markedly improved student model performance. These techniques exploit stimulus complexity, attention mechanisms, and global information capture to optimize knowledge transfer. KD has also proven effective for compressing large language models while preserving accuracy, reducing computational overhead, and improving inference speed. This survey synthesizes the recent literature, highlighting key findings, contributions, and future directions in knowledge distillation, to give researchers and practitioners insight into its evolving role in artificial intelligence and machine learning.
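To make the teacher-student transfer described above concrete, the sketch below shows a generic distillation objective combining temperature-scaled logit matching with a simple feature-alignment term, in the spirit of the methods the survey covers. It is a minimal illustration, not the survey's own method; the function name, hyperparameters (temperature, alpha, beta), and the assumption that student and teacher features share the same dimensionality are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_feats, teacher_feats,
                      labels, temperature=2.0, alpha=0.5, beta=0.1):
    """Illustrative KD objective: soft-label matching + hard labels + feature alignment."""
    # Soft-target loss: KL divergence between temperature-softened
    # teacher and student output distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-target loss: standard cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Feature alignment: match intermediate representations
    # (assumes student features already projected to the teacher's width).
    feat_loss = F.mse_loss(student_feats, teacher_feats)

    return alpha * soft_loss + (1 - alpha) * hard_loss + beta * feat_loss
```

In practice, the weights alpha and beta and the temperature are tuned per task, and the feature-alignment term is often applied at several layers rather than one, as in the attention-based and block-wise variants the abstract mentions.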
@article{yang2025_2504.13825,
  title   = {Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models},
  author  = {Junjie Yang and Junhao Song and Xudong Han and Ziqian Bi and Tianyang Wang and Chia Xin Liang and Xinyuan Song and Yichao Zhang and Qian Niu and Benji Peng and Keyu Chen and Ming Liu},
  journal = {arXiv preprint arXiv:2504.13825},
  year    = {2025}
}