Crowdsourcing MUSHRA Tests in the Age of Generative Speech Technologies: A Comparative Analysis of Subjective and Objective Testing Methods

1 June 2025
Laura Lechler, Chamran Moradi, Ivana Balic
Main: 4 pages, 2 figures, 2 tables; bibliography: 1 page
Abstract

The MUSHRA framework is widely used for detecting subtle audio quality differences but traditionally relies on expert listeners in controlled environments, making it costly and impractical for model development. As a result, objective metrics are often used during development, with expert evaluations conducted later. While effective for traditional DSP codecs, these metrics often fail to reliably evaluate generative models. This paper proposes adaptations for conducting MUSHRA tests with non-expert, crowdsourced listeners, focusing on generative speech codecs. We validate our approach by comparing results from MTurk and Prolific crowdsourcing platforms with expert listener data, assessing test-retest reliability and alignment. Additionally, we evaluate six objective metrics, showing that traditional metrics undervalue generative models. Our findings reveal platform-specific biases and emphasize codec-aware metrics, offering guidance for scalable perceptual testing of speech codecs.
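
As a rough illustration of the kind of analysis the abstract describes, the sketch below estimates test-retest reliability between two crowdsourced MUSHRA runs and checks how well an objective metric's scores align with mean subjective scores. This is a minimal sketch in Python, not the authors' pipeline: the condition set, the two-run setup, and all numbers are invented assumptions for illustration.

# Minimal sketch (not the authors' pipeline): given per-condition mean
# MUSHRA scores from two repeated crowdsourced runs, estimate test-retest
# reliability, then measure how well a hypothetical objective metric
# orders the conditions the way listeners do.
# All condition sets and example numbers are illustrative assumptions.

import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-condition mean MUSHRA scores (0-100) from two runs
# with the same crowdsourced listener pool.
run1 = np.array([92.0, 78.5, 61.0, 45.2, 30.1])  # e.g., ref, codecs A-C, anchor
run2 = np.array([90.5, 80.0, 58.7, 47.9, 28.4])

# Test-retest reliability: linear correlation between the two runs'
# condition means.
retest_r, _ = pearsonr(run1, run2)
print(f"test-retest Pearson r = {retest_r:.3f}")

# Hypothetical objective-metric scores for the same five conditions.
metric_scores = np.array([4.5, 3.1, 3.4, 2.2, 1.5])

# Alignment with subjective quality: Spearman rank correlation, which
# only asks whether the metric ranks conditions as listeners do.
mean_mushra = (run1 + run2) / 2
align_rho, _ = spearmanr(metric_scores, mean_mushra)
print(f"metric vs. MUSHRA Spearman rho = {align_rho:.3f}")

Rank correlation is a natural choice here because, for model development, an objective metric mainly needs to order candidate systems correctly; a metric that systematically undervalues generative codecs, as the paper reports for traditional metrics, would show up as a depressed rank correlation on such systems.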

@article{lechler2025_2506.00950,
  title={Crowdsourcing MUSHRA Tests in the Age of Generative Speech Technologies: A Comparative Analysis of Subjective and Objective Testing Methods},
  author={Laura Lechler and Chamran Moradi and Ivana Balic},
  journal={arXiv preprint arXiv:2506.00950},
  year={2025}
}