
arXiv:2107.07977

An Uncertainty-Aware, Shareable and Transparent Neural Network Architecture for Brain-Age Modeling

16 July 2021
Tim Hahn
J. Ernsting
N. Winter
V. Holstein
Ramona Leenings
Marie Beisemann
L. Fisch
K. Sarink
D. Emden
N. Opel
R. Redlich
J. Repple
D. Grotegerd
S. Meinert
J. Hirsch
Thoralf Niendorf
B. Endemann
F. Bamberg
Thomas Kroncke
Robin Bülow
H. Völzke
O. Stackelberg
R. Sowade
L. Umutlu
B. Schmidt
S. Caspers
German National Cohort Study Center Consortium
H. Kugel
T. Kircher
Benjamin Risse
Christian Gaser
James H. Cole
U. Dannlowski
Klaus Berger
Abstract

The deviation between chronological age and age predicted from neuroimaging data has been identified as a sensitive risk marker of cross-disorder brain changes, growing into a cornerstone of biological age research. However, the machine learning models underlying the field do not consider uncertainty, thereby confounding results with training-data density and variability. In addition, existing models are commonly based on homogeneous training sets, are often not independently validated, and cannot be shared due to data-protection constraints. Here, we introduce an uncertainty-aware, shareable, and transparent Monte-Carlo Dropout Composite Quantile Regression (MCCQR) neural network trained on N=10,691 datasets from the German National Cohort. The MCCQR model provides robust, distribution-free uncertainty quantification in high-dimensional neuroimaging data, achieving lower error rates than existing models across ten recruitment centers and in three independent validation samples (N=4,004). In two examples, we demonstrate that it prevents spurious associations and increases power to detect accelerated brain aging. We make the pre-trained model publicly available.
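The core idea behind MCCQR, as described in the abstract, combines two standard ingredients: a composite (multi-level) quantile loss, and Monte-Carlo dropout kept active at prediction time to sample a predictive distribution. The sketch below is a minimal NumPy illustration of that combination, not the authors' implementation: the data are a synthetic one-feature stand-in for brain-age inputs, the hidden layer uses fixed random ReLU features so that only the quantile heads are trained, and all names and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for brain-age data: one feature, "age" = 50 + 10*x + noise
X = rng.normal(size=(500, 1))
y = 50.0 + 10.0 * X[:, 0] + rng.normal(scale=2.0, size=500)

TAUS = np.array([0.1, 0.5, 0.9])  # quantile levels of the composite loss
P_DROP = 0.2                      # dropout rate (kept active at test time)
H = 64                            # number of random ReLU features

# Fixed random hidden layer; only the quantile heads (W2, b2) are trained,
# which keeps this sketch's optimization problem convex.
W1 = rng.normal(size=(1, H))
b1 = rng.normal(size=H)
W2 = np.zeros((H, len(TAUS)))
b2 = np.zeros(len(TAUS))

def features(x, train):
    h = np.maximum(x @ W1 + b1, 0.0)
    if train:  # Monte-Carlo dropout: the same mask sampling drives prediction
        h = h * (rng.random(h.shape) > P_DROP) / (1.0 - P_DROP)
    return h

def predict(x, mc=True):
    return features(x, train=mc) @ W2 + b2  # one output column per quantile

def pinball_grad(pred, target, taus):
    # Subgradient of the pinball (quantile) loss, averaged over the batch
    diff = target[:, None] - pred
    return np.where(diff > 0, -taus, 1.0 - taus) / len(target)

lr = 0.1
for _ in range(4000):
    h = features(X, train=True)
    g = pinball_grad(h @ W2 + b2, y, TAUS)
    W2 -= lr * (h.T @ g)
    b2 -= lr * g.sum(axis=0)

# MC-dropout predictive distribution at a new point: repeated stochastic
# forward passes yield both quantile estimates and an uncertainty proxy.
x_new = np.zeros((1, 1))
samples = np.stack([predict(x_new) for _ in range(300)])  # shape (300, 1, 3)
median = samples[:, 0, 1].mean()   # MC mean of the tau=0.5 head
spread = samples[:, 0, 1].std()    # epistemic-uncertainty proxy
```

In this toy setup, the spread of the median head across dropout samples plays the role of the model's uncertainty estimate, which the paper argues is what prevents spurious brain-age associations in low-density regions of the training data.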
