Distil-DCCRN: A Small-footprint DCCRN Leveraging Feature-based Knowledge Distillation in Speech Enhancement

IEEE Signal Processing Letters (SPL), 2024
8 August 2024
Runduo Han, Weiming Xu, Zihan Zhang, Mingshuai Liu, Lei Xie
ArXiv (abs) · PDF · HTML

Papers citing "Distil-DCCRN: A Small-footprint DCCRN Leveraging Feature-based Knowledge Distillation in Speech Enhancement"

3 / 3 papers shown
CabinSep: IR-Augmented Mask-Based MVDR for Real-Time In-Car Speech Separation with Distributed Heterogeneous Arrays
Runduo Han, Yanxin Hu, Yihui Fu, Zihan Zhang, Yukai Jv, Li Chen, Lei Xie
01 Sep 2025
I$^2$RF-TFCKD: Intra-Inter Representation Fusion with Time-Frequency Calibration Knowledge Distillation for Speech Enhancement
Jiaming Cheng, Ruiyu Liang, Chao Xu, Jing Li, Wei Zhou, Rui Liu, Björn W. Schuller, Xiaoshuai Hao
16 Jun 2025
Knowledge Distillation for Speech Denoising by Latent Representation Alignment with Cosine Distance
Diep Luong, Mikko Heikkinen, Konstantinos Drossos, Maria Sandsten
06 May 2025