CleanUMamba: A Compact Mamba Network for Speech Denoising using Channel Pruning

14 October 2024
Sjoerd Groot
Qinyu Chen
Jan C. van Gemert
Chang Gao
Abstract

This paper presents CleanUMamba, a time-domain neural network architecture designed for real-time causal audio denoising applied directly to raw waveforms. CleanUMamba leverages a U-Net encoder-decoder structure, incorporating the Mamba state-space model in the bottleneck layer. By replacing conventional self-attention and LSTM mechanisms with Mamba, our architecture offers superior denoising performance while maintaining a constant memory footprint, enabling streaming operation. To enhance efficiency, we applied structured channel pruning, achieving an 8× reduction in model size without compromising audio quality. Our model demonstrates strong results in the Interspeech 2020 Deep Noise Suppression challenge. Specifically, CleanUMamba achieves a PESQ score of 2.42 and STOI of 95.1% with only 442K parameters and 468M MACs, matching or outperforming larger models in real-time performance. Code will be available at: this https URL
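The structured channel pruning mentioned in the abstract removes whole output channels from a layer (rather than individual weights), so the pruned network stays dense and the downstream layer's input channels shrink to match. The following is a minimal numpy sketch of one common variant, L1-norm channel selection; the function name, the L1 criterion, and the keep ratio are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def prune_channels(weight, next_weight, keep_ratio=0.5):
    """Structured channel pruning sketch: rank the output channels of a
    1-D conv layer by L1 norm and keep only the strongest fraction.

    weight:      (out_ch, in_ch, k)   kernel of the layer being pruned
    next_weight: (out_ch2, out_ch, k) kernel of the following layer, whose
                 input channels must be pruned to match
    """
    out_ch = weight.shape[0]
    n_keep = max(1, int(out_ch * keep_ratio))
    # Importance score per output channel: L1 norm of all its weights.
    scores = np.abs(weight).reshape(out_ch, -1).sum(axis=1)
    # Indices of the n_keep highest-scoring channels, in original order.
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return weight[keep], next_weight[:, keep], keep

rng = np.random.default_rng(0)
w1 = rng.standard_normal((16, 8, 3))   # layer to prune: 16 output channels
w2 = rng.standard_normal((32, 16, 3))  # downstream layer: 16 input channels
w1p, w2p, kept = prune_channels(w1, w2, keep_ratio=0.25)
print(w1p.shape, w2p.shape)  # (4, 8, 3) (32, 4, 3)
```

Because entire channels are removed, both pruned tensors remain ordinary dense arrays, which is what yields the real parameter and MAC reductions quoted in the abstract (as opposed to unstructured sparsity, which needs special kernels to exploit).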

@article{groot2025_2410.11062,
  title={CleanUMamba: A Compact Mamba Network for Speech Denoising using Channel Pruning},
  author={Sjoerd Groot and Qinyu Chen and Jan C. van Gemert and Chang Gao},
  journal={arXiv preprint arXiv:2410.11062},
  year={2025}
}