Benchmarking Training Paradigms, Dataset Composition, and Model Scaling for Child ASR in ESPnet

22 August 2025
Anyu Ying
Natarajan Balaji Shankar
Chyi-Jiunn Lin
Mohan Shi
Pu Wang
Hye-jin Shim
Siddhant Arora
Hugo Van hamme
Abeer Alwan
Shinji Watanabe
arXiv: 2508.16576 (abs · PDF · HTML) · Code on GitHub
Main: 4 pages · 1 figure · Bibliography: 1 page · 7 tables
Abstract

Despite advances in automatic speech recognition (ASR), child speech recognition remains challenging due to acoustic variability and limited annotated data. While fine-tuning adult ASR models on child speech is common, comparisons with flat-start training remain underexplored. We compare flat-start training across multiple datasets, self-supervised learning (SSL) representations (WavLM, XEUS), and decoder architectures. Our results show that SSL representations are biased toward adult speech, and that flat-start training on child speech mitigates these biases. We also analyze model scaling, finding consistent improvements up to 1B parameters, beyond which performance plateaus. Additionally, age-related ASR and speaker verification analysis highlights the limitations of proprietary models like Whisper, emphasizing the need for open-data models for reliable child speech research. All investigations are conducted using ESPnet, and our publicly available benchmark provides insights into training strategies for robust child speech processing.
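
The abstract contrasts frozen SSL frontends with flat-start training on child speech. As a minimal illustration of the SSL-feature side of that comparison, the sketch below extracts layer-wise WavLM representations with Hugging Face Transformers. This is not the paper's ESPnet pipeline; the checkpoint name, dummy audio, and layer handling are illustrative assumptions.

```python
# Minimal sketch: layer-wise features from a frozen WavLM SSL frontend.
# Checkpoint and inputs are illustrative, not the paper's actual setup.
import torch
from transformers import Wav2Vec2FeatureExtractor, WavLMModel

extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")
model.eval()

# One second of dummy 16 kHz audio standing in for a child-speech utterance.
waveform = torch.randn(16000)

inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple of (num_layers + 1) tensors of shape
# (batch, frames, dim); stacking them exposes the per-layer features.
features = torch.stack(outputs.hidden_states)
print(features.shape)  # torch.Size([13, 1, 49, 768]) for the base model
```

In SSL-based ASR frontends (e.g., in ESPnet or S3PRL), a learned weighted sum over these hidden layers is commonly fed to the downstream encoder; the adult-speech bias the paper reports would show up in such frozen features.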
