Speaker detection in the wild: Lessons learned from JSALT 2019

2 December 2019
Leibny Paola García-Perera
Jesus Villalba
H. Bredin
Jun Du
Diego Castán
Alejandrina Cristià
Latané Bullock
Ling Guo
K. Okabe
P. S. Nidadavolu
Saurabh Kataria
Sizhu Chen
Léo Galmant
Marvin Lavechin
Lei Sun
Marie-Philippe Gill
Bar Ben Yair
Sajjad Abdoli
Xin Wang
Wassim Bouaziz
Hadrien Titeux
Emmanuel Dupoux
Kong Aik Lee
Najim Dehak
Abstract

This paper presents the problems and solutions addressed at the JSALT workshop when using a single microphone for speaker detection in adverse scenarios. The main focus was to tackle a wide range of conditions, from meetings to speech in the wild. We describe the research threads we explored and the set of modules that proved successful in these scenarios. The ultimate goal was to explore speaker detection, but our first finding was that effective diarization improves detection, and omitting the diarization stage degrades performance. All the configurations in our research agree on this fact and follow a common backbone that includes diarization as a preceding stage. With this backbone, we analyzed the following problems: voice activity detection, how to deal with noisy signals, domain mismatch, how to improve clustering, and the overall impact of the earlier stages on the final speaker detection. In this paper, we show partial results for speaker diarization to give a better understanding of the problem, and we present the final results for speaker detection.
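The abstract describes a backbone in which diarization precedes speaker detection: voice activity detection, per-segment speaker representations, clustering, and then detection on top of the resulting clusters. The sketch below is a minimal, self-contained illustration of that kind of pipeline, not a reproduction of the JSALT 2019 systems; the energy-based VAD, the toy embedding function, the agglomerative clustering back-end, the cosine scoring, and all thresholds are assumptions chosen only to make the structure concrete and runnable.

```python
# Minimal sketch of a "diarization first, detection on top" pipeline.
# All placeholders (energy VAD, toy embeddings, thresholds) are assumptions
# for illustration and do not reproduce the systems described in the paper.
import numpy as np
from sklearn.cluster import AgglomerativeClustering


def voice_activity_detection(signal, sr, frame_len=0.025, hop=0.010, thresh=1e-4):
    """Energy-based VAD placeholder: returns (start, end) speech segments in seconds."""
    frame, step = int(frame_len * sr), int(hop * sr)
    energies = np.array([np.mean(signal[i:i + frame] ** 2)
                         for i in range(0, len(signal) - frame, step)])
    speech = energies > thresh
    segments, start = [], None
    for i, is_speech in enumerate(speech):
        t = i * hop
        if is_speech and start is None:
            start = t
        elif not is_speech and start is not None:
            segments.append((start, t))
            start = None
    if start is not None:
        segments.append((start, len(signal) / sr))
    return segments


def extract_embedding(signal, sr, segment):
    """Placeholder for a trained speaker-embedding extractor (e.g. x-vector style)."""
    start, end = segment
    chunk = signal[int(start * sr):int(end * sr)]
    # Stand-in statistics only; a real system would use a neural extractor.
    return np.array([chunk.mean(), chunk.std(), np.abs(chunk).max()])


def diarize(signal, sr, n_speakers=2):
    """Backbone: VAD -> per-segment embeddings -> clustering into speaker labels."""
    segments = voice_activity_detection(signal, sr)
    if not segments:
        return []
    X = np.stack([extract_embedding(signal, sr, seg) for seg in segments])
    n_clusters = min(n_speakers, len(segments))
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)
    return list(zip(segments, labels))


def detect_speaker(signal, sr, enrollment_embedding, threshold=0.5):
    """Detection on top of diarization: score each segment's embedding against
    the enrolled speaker and return the segments that exceed the threshold."""
    hits = []
    for seg, label in diarize(signal, sr):
        emb = extract_embedding(signal, sr, seg)
        score = float(np.dot(emb, enrollment_embedding) /
                      (np.linalg.norm(emb) * np.linalg.norm(enrollment_embedding) + 1e-8))
        if score > threshold:
            hits.append((seg, label, score))
    return hits
```

With a trained embedding extractor and a proper scoring back-end in place of the placeholders, this structure mirrors the backbone the abstract argues for: detection operates on diarized segments rather than on the raw single-microphone signal.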
