End-to-end speaker diarization enables accurate overlap-aware diarization by jointly estimating the speech activities of multiple speakers in parallel. This approach is data-hungry, however: it requires a large amount of labeled conversational data, which cannot be fully obtained from real datasets alone. Large-scale simulated data is therefore often used for pretraining, but it demands enormous storage and I/O capacity, and simulating data that closely resembles real conversations remains challenging. In this paper, we propose pretraining a model to identify multiple speakers from a fully overlapped input mixture as an alternative to pretraining a diarization model. This eliminates the need to prepare a large-scale simulated dataset while still leveraging large-scale speaker recognition datasets for training. Through comprehensive experiments, we demonstrate that the proposed method yields a highly accurate yet lightweight local diarization model without simulated conversational data.
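The pretraining idea described above (training on fully overlapped mixtures with multi-speaker targets) can be sketched minimally as follows. This is an illustrative assumption of how such training examples could be built, not the paper's implementation; all names (`make_overlapped_mixture`, the speaker keys) are hypothetical.

```python
import numpy as np

def make_overlapped_mixture(utterances, speaker_ids):
    """Build one pretraining example: a fully overlapped mixture of the
    chosen speakers' waveforms and a multi-hot speaker identity target.
    (Hypothetical helper; the actual pipeline in the paper may differ.)"""
    # Truncate every chosen signal to the shortest one so the overlap is total.
    length = min(len(utterances[s]) for s in speaker_ids)
    mixture = np.zeros(length, dtype=np.float32)
    for s in speaker_ids:
        mixture += utterances[s][:length]
    # Multi-hot label over the whole speaker inventory, suitable as a
    # multi-label classification target (e.g. with binary cross-entropy).
    all_speakers = sorted(utterances)
    target = np.array([1.0 if s in speaker_ids else 0.0 for s in all_speakers],
                      dtype=np.float32)
    return mixture, target

# Toy example: a three-speaker inventory drawn from a speaker recognition
# corpus; mix two of the speakers into one fully overlapped segment.
rng = np.random.default_rng(0)
utts = {f"spk{i}": rng.standard_normal(16000 * (i + 1)).astype(np.float32)
        for i in range(3)}
mix, y = make_overlapped_mixture(utts, ["spk0", "spk2"])
```

Because each example needs only single-speaker utterances from a speaker recognition dataset, no simulated conversational data (and no large simulated corpus on disk) is required; mixtures can be composed on the fly.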
@article{horiguchi2025_2505.24545,
  title={Pretraining Multi-Speaker Identification for Neural Speaker Diarization},
  author={Shota Horiguchi and Atsushi Ando and Marc Delcroix and Naohiro Tawara},
  journal={arXiv preprint arXiv:2505.24545},
  year={2025}
}