Pretraining Multi-Speaker Identification for Neural Speaker Diarization

30 May 2025
Shota Horiguchi, Atsushi Ando, Marc Delcroix, Naohiro Tawara
Main: 4 pages, 1 figure, 4 tables; Bibliography: 1 page
Abstract

End-to-end speaker diarization enables accurate overlap-aware diarization by jointly estimating multiple speakers' speech activities in parallel. This approach is data-hungry, requiring a large amount of labeled conversational data, which cannot be fully obtained from real datasets alone. To address this issue, large-scale simulated data is often used for pretraining, but it demands enormous storage and I/O capacity, and simulating data that closely resembles real conversations remains challenging. In this paper, we propose pretraining a model to identify multiple speakers from a fully overlapped input mixture as an alternative to pretraining a diarization model. This method eliminates the need to prepare a large-scale simulated dataset while leveraging large-scale speaker recognition datasets for training. Through comprehensive experiments, we demonstrate that the proposed method enables a highly accurate yet lightweight local diarization model without simulated conversational data.
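To make the pretraining idea concrete, the sketch below shows what such a multi-speaker identification pretraining stage could look like: utterances from several speakers are combined into a fully overlapped mixture, and a model is trained to identify every speaker present via multi-label classification over the speaker-recognition vocabulary. This is an illustrative PyTorch sketch, not the authors' implementation; the architecture, feature dimensions, mixing procedure, and all hyperparameters are assumptions, and random tensors stand in for real speech features.

# Minimal sketch (assumed, not the authors' code) of multi-speaker
# identification pretraining on fully overlapped mixtures.
import torch
import torch.nn as nn

NUM_SPEAKERS_TOTAL = 1000   # speaker-recognition vocabulary size (assumed)
MIX_SIZE = 2                # speakers per fully overlapped mixture (assumed)
FEAT_DIM = 80               # e.g. log-mel feature dimension (assumed)

class MultiSpeakerIdModel(nn.Module):
    """Toy encoder + multi-label speaker classifier (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(FEAT_DIM, 256, batch_first=True)
        self.classifier = nn.Linear(256, NUM_SPEAKERS_TOTAL)

    def forward(self, feats):                  # feats: (B, T, FEAT_DIM)
        hidden, _ = self.encoder(feats)
        pooled = hidden.mean(dim=1)            # temporal average pooling
        return self.classifier(pooled)         # (B, NUM_SPEAKERS_TOTAL) logits

def make_fully_overlapped_batch(batch_size, num_frames=200):
    """Stand-in data loader: in practice, single-speaker utterances from a
    speaker-recognition corpus would be mixed so they overlap completely;
    random tensors keep this sketch self-contained and runnable."""
    feats = torch.randn(batch_size, num_frames, FEAT_DIM)
    labels = torch.zeros(batch_size, NUM_SPEAKERS_TOTAL)
    for i in range(batch_size):
        speakers = torch.randperm(NUM_SPEAKERS_TOTAL)[:MIX_SIZE]
        labels[i, speakers] = 1.0              # multi-hot: who is in the mix
    return feats, labels

model = MultiSpeakerIdModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()             # multi-label objective

for step in range(3):                          # a few steps to show the loop
    feats, labels = make_fully_overlapped_batch(batch_size=8)
    loss = criterion(model(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.3f}")

The intended payoff, per the abstract, is that an encoder pretrained this way can then be adapted into a lightweight local diarization model using only real conversational data, with no simulated conversations required.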

@article{horiguchi2025_2505.24545,
  title={Pretraining Multi-Speaker Identification for Neural Speaker Diarization},
  author={Shota Horiguchi and Atsushi Ando and Marc Delcroix and Naohiro Tawara},
  journal={arXiv preprint arXiv:2505.24545},
  year={2025}
}