A Dual-Stage Time-Context Network for Speech-Based Alzheimer's Disease Detection

Alzheimer's disease (AD) is a progressive neurodegenerative disorder that leads to irreversible cognitive decline in memory and communication. Early detection of AD through speech analysis is crucial for delaying disease progression. However, existing methods mainly rely on pre-trained acoustic models for feature extraction and have limited ability to model both local and global patterns in long-duration speech. In this letter, we introduce a Dual-Stage Time-Context Network (DSTC-Net) for speech-based AD detection, which integrates local acoustic features with global conversational context in long-duration speech. We first partition each long-duration recording into fixed-length segments to reduce computational overhead and preserve local temporal details. Next, we feed these segments into an Intra-Segment Temporal Attention (ISTA) module, where a bidirectional Long Short-Term Memory (BiLSTM) network with frame-level attention extracts enhanced local representations. Then, a Cross-Segment Context Attention (CSCA) module applies convolution-based context modeling and adaptive attention to unify global patterns across all segments. Experiments on the ADReSSo dataset show that our DSTC-Net outperforms state-of-the-art models, reaching 83.10% accuracy and 83.15% F1.
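
The paper's code is not reproduced here; below is a minimal PyTorch sketch of the two-stage pipeline the abstract describes. The ISTA and CSCA module names come from the paper, but the feature dimension, hidden sizes, attention-pooling scheme, convolution kernel, and classifier head are all illustrative assumptions, not details confirmed by the source.

import torch
import torch.nn as nn

class ISTA(nn.Module):
    # Intra-Segment Temporal Attention (sketch): BiLSTM over the frames of one
    # segment, then frame-level attention pooling to one embedding per segment.
    def __init__(self, feat_dim=128, hidden=128):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # frame-level attention scores

    def forward(self, x):                       # x: (batch, frames, feat_dim)
        h, _ = self.bilstm(x)                   # (batch, frames, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over frames
        return (w * h).sum(dim=1)               # (batch, 2*hidden) segment embedding

class CSCA(nn.Module):
    # Cross-Segment Context Attention (sketch): a 1-D convolution models context
    # across neighboring segments; adaptive attention then pools all segments.
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.attn = nn.Linear(dim, 1)

    def forward(self, s):                       # s: (batch, segments, dim)
        c = self.conv(s.transpose(1, 2)).transpose(1, 2)  # contextualized segments
        w = torch.softmax(self.attn(c), dim=1)  # attention weights over segments
        return (w * c).sum(dim=1)               # (batch, dim) recording embedding

class DSTCNet(nn.Module):
    # Dual-stage pipeline: per-segment ISTA, cross-segment CSCA, linear classifier.
    def __init__(self, feat_dim=128, hidden=128, n_classes=2):
        super().__init__()
        self.ista = ISTA(feat_dim, hidden)
        self.csca = CSCA(2 * hidden)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, segments, frames, feat_dim)
        b, n, t, d = x.shape
        seg = self.ista(x.reshape(b * n, t, d)).reshape(b, n, -1)
        return self.head(self.csca(seg))        # AD-vs-control logits

For example, a batch of 4 recordings, each partitioned into 10 fixed-length segments of 200 frames with 128-dimensional acoustic features, would be classified with DSTCNet()(torch.randn(4, 10, 200, 128)), yielding logits of shape (4, 2).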
@article{gao2025_2502.13064,
  title={A Dual-Stage Time-Context Network for Speech-Based Alzheimer's Disease Detection},
  author={Yifan Gao and Long Guo and Hong Liu},
  journal={arXiv preprint arXiv:2502.13064},
  year={2025}
}