
V2C-Long: Longitudinal Cortex Reconstruction with Spatiotemporal Correspondence

Abstract

Reconstructing the cortex from longitudinal magnetic resonance imaging (MRI) is indispensable for analyzing morphological alterations in the human brain. Despite recent advances in cortical surface reconstruction with deep learning, challenges arising from longitudinal data persist. In particular, the lack of strong spatiotemporal point correspondence between highly convoluted brain surfaces hinders downstream analyses, as local morphology is not directly comparable unless the anatomical location is matched precisely. To address this issue, we present V2C-Long, the first dedicated deep learning-based cortex reconstruction method for longitudinal MRI. V2C-Long exhibits strong inherent spatiotemporal correspondence across subjects and visits, thereby reducing the need for surface-based post-processing. We establish this correspondence directly during reconstruction via the composition of two deep template-deformation networks and a novel aggregation of within-subject templates in mesh space. We validate V2C-Long on two large neuroimaging studies, focusing on surface accuracy, consistency, generalization, test-retest reliability, and sensitivity. The results reveal a substantial improvement in longitudinal consistency and accuracy compared to existing methods. In addition, we demonstrate stronger evidence for longitudinal cortical atrophy in Alzheimer's disease than longitudinal FreeSurfer does.
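The abstract mentions aggregating within-subject templates in mesh space, which presupposes that the per-visit meshes share vertex correspondence. The snippet below is a minimal sketch of that idea, not the authors' implementation: it assumes each visit's template mesh has identical vertex ordering and face connectivity (as template deformation implies) and illustrates one plausible aggregation, a vertex-wise mean; the function name and averaging choice are hypothetical.

```python
# Minimal sketch (assumption, not the V2C-Long implementation): build a
# within-subject template by averaging corresponding vertices across visits.
# Assumes all visit meshes share vertex count, ordering, and faces.
import numpy as np

def aggregate_within_subject_template(visit_vertices, faces):
    """Vertex-wise mean over a subject's visit meshes.

    visit_vertices: list of (V, 3) arrays, one per visit, in a common space.
    faces: (F, 3) integer array of triangle indices shared by all visits.
    Returns the aggregated (V, 3) vertices and the shared faces.
    """
    stacked = np.stack(visit_vertices, axis=0)   # (T, V, 3)
    template_vertices = stacked.mean(axis=0)     # average matched vertices
    return template_vertices, faces

# Hypothetical usage: three visits of one subject, already aligned.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(10, 3))
    visits = [base + 0.01 * rng.normal(size=(10, 3)) for _ in range(3)]
    faces = np.array([[0, 1, 2], [2, 3, 4]])
    verts, _ = aggregate_within_subject_template(visits, faces)
    print(verts.shape)  # (10, 3)
```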

@article{bongratz2025_2402.17438,
  title={V2C-Long: Longitudinal Cortex Reconstruction with Spatiotemporal Correspondence},
  author={Fabian Bongratz and Jan Fecht and Anne-Marie Rickmann and Christian Wachinger},
  journal={arXiv preprint arXiv:2402.17438},
  year={2025}
}