The 2025 PNPL Competition: Speech Detection and Phoneme Classification in the LibriBrain Dataset

Main: 9 pages · Bibliography: 3 pages · 1 figure · 2 tables
Abstract

The advance of speech decoding from non-invasive brain data holds the potential for profound societal impact. Among its most promising applications is the restoration of communication to paralysed individuals affected by speech deficits such as dysarthria, without the need for high-risk surgical interventions. The ultimate aim of the 2025 PNPL competition is to create the conditions for an "ImageNet moment", or breakthrough, in non-invasive neural decoding by harnessing the collective power of the machine learning community. To facilitate this vision, we present the largest within-subject MEG dataset recorded to date (LibriBrain), together with a user-friendly Python library (pnpl) for easy data access and integration with deep learning frameworks. For the competition we define two foundational tasks (i.e. Speech Detection and Phoneme Classification from brain data), complete with standardised data splits and evaluation metrics, illustrative benchmark models, online tutorial code, a community discussion board, and a public leaderboard for submissions. To promote accessibility and participation, the competition features a Standard track that emphasises algorithmic innovation, as well as an Extended track that is expected to reward larger-scale computing, accelerating progress toward a non-invasive brain-computer interface for speech.
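
As a rough illustration of the kind of data access the pnpl library is meant to provide, the sketch below wraps a LibriBrain split in a standard PyTorch DataLoader. The class name LibriBrainSpeech, its arguments (data_path, partition), and the (MEG window, label) item format are assumptions made for illustration rather than a verbatim excerpt of the library's API; the official tutorial code linked from the competition is the authoritative reference.

```python
# Minimal sketch (not the official tutorial code): loading a LibriBrain
# speech-detection split with pnpl and feeding it to a PyTorch DataLoader.
# Class and argument names below are assumptions; check the pnpl docs.
import torch
from torch.utils.data import DataLoader
from pnpl.datasets import LibriBrainSpeech  # assumed dataset class

# Hypothetical arguments: local cache directory and standardised split name.
train_set = LibriBrainSpeech(data_path="./libribrain", partition="train")

# Items are assumed to be (MEG window, label) pairs, so the dataset plugs
# directly into an ordinary PyTorch training loop.
loader = DataLoader(train_set, batch_size=32, shuffle=True)

for meg, label in loader:
    # meg: tensor of shape (batch, channels, time); label: speech / no-speech
    pass  # forward pass of a benchmark model would go here
```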

@article{landau2025_2506.10165,
  title={The 2025 PNPL Competition: Speech Detection and Phoneme Classification in the LibriBrain Dataset},
  author={Gilad Landau and Miran Özdogan and Gereon Elvers and Francesco Mantegna and Pratik Somaiya and Dulhan Jayalath and Luisa Kurth and Teyun Kwon and Brendan Shillingford and Greg Farquhar and Minqi Jiang and Karim Jerbi and Hamza Abdelhedi and Yorguin Mantilla Ramos and Caglar Gulcehre and Mark Woolrich and Natalie Voets and Oiwi Parker Jones},
  journal={arXiv preprint arXiv:2506.10165},
  year={2025}
}