Seamless Dysfluent Speech Text Alignment for Disordered Speech Analysis

Accurate alignment of dysfluent speech with intended text is crucial for automating the diagnosis of neurodegenerative speech disorders. Traditional methods often fail to model phoneme similarities effectively, limiting their performance. In this work, we propose Neural LCS, a novel approach for dysfluent text-text and speech-text alignment. Neural LCS addresses key challenges, including partial alignment and context-aware similarity mapping, by leveraging robust phoneme-level modeling. We evaluate our method on a large-scale simulated dataset, generated using advanced data simulation techniques, and on real primary progressive aphasia (PPA) data. Neural LCS significantly outperforms state-of-the-art models in both alignment accuracy and dysfluent speech segmentation. Our results demonstrate the potential of Neural LCS to enhance automated systems for diagnosing and analyzing speech disorders, offering a more accurate and linguistically grounded solution for dysfluent speech alignment.
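To make the alignment idea concrete, the following is a minimal sketch of LCS-style dynamic programming over phoneme sequences. It uses a hand-crafted soft similarity function (`phoneme_sim`, an illustrative assumption) in place of the learned, context-aware phoneme model that Neural LCS would provide; the thresholded matching shows how partial alignment arises when a dysfluent production repeats or drops sounds.

```python
# Illustrative sketch only: classic LCS dynamic programming over phoneme
# sequences with a soft similarity score. The similarity table below is a
# toy assumption, not the paper's learned phoneme model.

def phoneme_sim(a: str, b: str) -> float:
    """Toy similarity: 1.0 for identical phonemes, 0.5 for a few
    hand-picked confusable pairs, else 0.0."""
    confusable = {frozenset({"p", "b"}), frozenset({"t", "d"}),
                  frozenset({"s", "z"})}
    if a == b:
        return 1.0
    return 0.5 if frozenset({a, b}) in confusable else 0.0

def soft_lcs_align(ref, hyp, threshold=0.5):
    """Fill the DP table, then backtrack to recover aligned index pairs.
    Pairs scoring below `threshold` count as non-matches, yielding a
    partial alignment when the dysfluent sequence inserts or omits sounds."""
    n, m = len(ref), len(hyp)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = phoneme_sim(ref[i - 1], hyp[j - 1])
            match = dp[i - 1][j - 1] + (s if s >= threshold else 0.0)
            dp[i][j] = max(match, dp[i - 1][j], dp[i][j - 1])
    # Backtrack to collect the matched (ref_index, hyp_index) pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        s = phoneme_sim(ref[i - 1], hyp[j - 1])
        if s >= threshold and dp[i][j] == dp[i - 1][j - 1] + s:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[n][m], list(reversed(pairs))

# Intended text "pat" vs. a dysfluent production with a repeated onset: "p p a t".
score, pairs = soft_lcs_align(["p", "a", "t"], ["p", "p", "a", "t"])
print(score, pairs)  # all three intended phonemes align; one repetition is skipped
```

Replacing `phoneme_sim` with a neural, context-aware similarity is the conceptual step from plain LCS to a learned alignment model; this sketch only shows the dynamic-programming skeleton that such a model would plug into.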
@article{ye2025_2506.12073,
  title={Seamless Dysfluent Speech Text Alignment for Disordered Speech Analysis},
  author={Zongli Ye and Jiachen Lian and Xuanru Zhou and Jinming Zhang and Haodong Li and Shuhe Li and Chenxu Guo and Anaisha Das and Peter Park and Zoe Ezzes and Jet Vonk and Brittany Morin and Rian Bogley and Lisa Wauters and Zachary Miller and Maria Gorno-Tempini and Gopala Anumanchipalli},
  journal={arXiv preprint arXiv:2506.12073},
  year={2025}
}