Empirical Evaluation of Knowledge Distillation from Transformers to Subquadratic Language Models

19 April 2025
Patrick Haller
Jonas Golde
Alan Akbik
Abstract

Knowledge distillation is a widely used technique for compressing large language models (LLMs) by training a smaller student model to mimic a larger teacher model. Typically, both the teacher and student are Transformer-based architectures, leveraging softmax attention for sequence modeling. However, the quadratic complexity of self-attention at inference time remains a significant bottleneck, motivating the exploration of subquadratic alternatives such as structured state-space models (SSMs), linear attention, and recurrent architectures. In this work, we systematically evaluate the transferability of knowledge distillation from a Transformer teacher to nine subquadratic student architectures. Our study aims to determine which subquadratic model best aligns with the teacher's learned representations and how different architectural constraints influence the distillation process. We also investigate the impact of intelligent initialization strategies, including matrix mixing and query-key-value (QKV) copying, on the adaptation process. Our empirical results on multiple NLP benchmarks provide insights into the trade-offs between efficiency and performance, highlighting key factors for successful knowledge transfer to subquadratic architectures.
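The abstract mentions two ingredients that a short sketch can make concrete: the distillation objective and the query-key-value (QKV) copying initialization. The snippet below is a minimal, hypothetical PyTorch sketch under common assumptions, not the authors' training code; the function and attribute names (distillation_loss, copy_qkv, q_proj, and so on) are illustrative, and the QKV copy assumes the student's token mixer exposes projections with the same shapes as the teacher's attention layer.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Standard logit distillation: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2.
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

def copy_qkv(teacher_attn, student_mixer):
    # Illustrative QKV copying: initialize the student's input projections
    # from the teacher's attention Q/K/V weights (hypothetical module names,
    # assumes matching hidden sizes).
    with torch.no_grad():
        student_mixer.q_proj.weight.copy_(teacher_attn.q_proj.weight)
        student_mixer.k_proj.weight.copy_(teacher_attn.k_proj.weight)
        student_mixer.v_proj.weight.copy_(teacher_attn.v_proj.weight)

In practice the distillation term is typically combined with the ordinary cross-entropy loss on the training labels; the weighting between the two is a tunable hyperparameter.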

@article{haller2025_2504.14366,
  title={Empirical Evaluation of Knowledge Distillation from Transformers to Subquadratic Language Models},
  author={Patrick Haller and Jonas Golde and Alan Akbik},
  journal={arXiv preprint arXiv:2504.14366},
  year={2025}
}