Effective and Efficient One-pass Compression of Speech Foundation Models Using Sparsity-aware Self-pinching Gates

28 May 2025
Haoning Xu
Zhaoqing Li
Youjun Chen
Huimeng Wang
Guinan Li
Mengzhe Geng
Chengxi Deng
Xunying Liu
Abstract

This paper presents a novel approach to speech foundation model compression that tightly integrates model pruning and parameter update into a single stage. Highly compact, layer-level tied self-pinching gates, each containing only a single learnable threshold, are jointly trained with the uncompressed models and used for fine-grained neuron-level pruning. Experiments conducted on the LibriSpeech-100hr corpus suggest that our approach reduces the number of parameters of the wav2vec2.0-base and HuBERT-large models by 65% and 60% respectively, while incurring no statistically significant word error rate (WER) increase on the test-clean dataset. Compared to previously published methods on the same task, our approach not only achieves the lowest WER of 7.05% on the test-clean dataset under a comparable model compression ratio of 4.26x, but also requires at least 25% less model compression time.
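The abstract does not spell out how the gate is parameterized. Below is a minimal PyTorch sketch of one plausible reading, in which a single learnable scalar threshold per layer softly gates output neurons by the magnitude of their weight norms during joint training. The class name SelfPinchingGate, the sigmoid relaxation, the temperature, and all numeric values are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class SelfPinchingGate(nn.Module):
    """Layer-level gate with a single learnable threshold (illustrative sketch)."""

    def __init__(self, init_threshold: float = 0.1, temperature: float = 0.01):
        super().__init__()
        # One learnable scalar threshold shared (tied) across the whole layer.
        self.threshold = nn.Parameter(torch.tensor(init_threshold))
        self.temperature = temperature

    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        # Per-output-neuron L2 norm compared against the shared threshold;
        # neurons whose norm falls below it are smoothly pushed toward zero.
        neuron_norm = weight.norm(dim=1, keepdim=True)   # (out_features, 1)
        gate = torch.sigmoid((neuron_norm - self.threshold) / self.temperature)
        return weight * gate                             # broadcast over rows


# Usage: gate a Transformer feed-forward projection during joint fine-tuning.
proj = nn.Linear(768, 3072)
gate = SelfPinchingGate()
pruned_weight = gate(proj.weight)
sparsity = (pruned_weight.abs() < 1e-6).float().mean().item()
print(f"approximate neuron-level sparsity: {sparsity:.2%}")

Because the threshold is a regular parameter, it receives gradients alongside the uncompressed model weights, which is one way the pruning decision and the parameter update could be folded into a single training pass as the abstract describes.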

@article{xu2025_2505.22608,
  title={Effective and Efficient One-pass Compression of Speech Foundation Models Using Sparsity-aware Self-pinching Gates},
  author={Haoning Xu and Zhaoqing Li and Youjun Chen and Huimeng Wang and Guinan Li and Mengzhe Geng and Chengxi Deng and Xunying Liu},
  journal={arXiv preprint arXiv:2505.22608},
  year={2025}
}