
FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian

Main: 10 pages, 1 figure, 4 tables; Bibliography: 4 pages
Abstract

The development of speech foundation models (SFMs) like Whisper and SeamlessM4T has significantly advanced the field of speech processing. However, their closed nature--with inaccessible training data and code--poses major challenges for reproducibility and fair evaluation. While other domains have made substantial progress toward open science by developing fully transparent models trained on open-source (OS) code and data, similar efforts in speech remain limited. To fill this gap, we introduce FAMA, the first family of open-science SFMs for English and Italian, trained on 150k+ hours of OS speech data. Moreover, we present a new dataset containing 16k hours of cleaned and pseudo-labeled speech for both languages. Results show that FAMA achieves competitive performance compared to existing SFMs while being up to 8 times faster. All artifacts, including code, datasets, and models, are released under OS-compliant licenses, promoting openness in speech technology research.
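
No usage snippet accompanies this page, but since the abstract states that the models are openly released, a minimal transcription sketch can illustrate how such checkpoints are typically invoked. The checkpoint identifier and pipeline compatibility below are assumptions for illustration only; consult the official release for the actual loading instructions.

# A minimal sketch, assuming the FAMA checkpoints are hosted on the Hugging
# Face Hub and are loadable via the standard transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="FBK-MT/fama-medium",  # hypothetical identifier, not confirmed here
    trust_remote_code=True,      # custom architectures may require remote code
)

# Transcribe a local English or Italian audio file.
result = asr("sample.wav")
print(result["text"])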

@article{papi2025_2505.22759,
  title={FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian},
  author={Sara Papi and Marco Gaido and Luisa Bentivogli and Alessio Brutti and Mauro Cettolo and Roberto Gretter and Marco Matassoni and Mohamed Nabih and Matteo Negri},
  journal={arXiv preprint arXiv:2505.22759},
  year={2025}
}