We describe the design and results of the BraTS 2023 Intracranial Meningioma Segmentation Challenge. The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas, which are typically benign extra-axial tumors with diverse radiologic and anatomical presentations and a propensity for multiplicity. Nine participating teams each developed deep-learning automated segmentation models using image data from the largest multi-institutional, systematically expert-annotated, multilabel, multi-sequence meningioma MRI dataset to date, comprising 1000 training cases, 141 validation cases, and 283 hidden test cases. Each case included T2, FLAIR, T1, and T1Gd brain MRI sequences with associated tumor compartment labels delineating enhancing tumor, non-enhancing tumor, and surrounding non-enhancing FLAIR hyperintensity. Participant automated segmentation models were evaluated and ranked using a scoring system based on lesion-wise metrics, including the Dice similarity coefficient (DSC) and the 95% Hausdorff distance. The top-ranked team achieved lesion-wise median DSCs of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively, with corresponding average DSCs of 0.899, 0.904, and 0.871. These results serve as state-of-the-art benchmarks for future pre-operative meningioma automated segmentation algorithms. Additionally, we found that 1286 of 1424 cases (90.3%) had at least one tumor compartment voxel abutting the edge of the skull-stripped image, a finding that warrants further investigation into optimal pre-processing and face-anonymization steps.
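The lesion-wise DSC used for ranking differs from a conventional volume-wise DSC in that reference and predicted segmentations are first split into connected components and scored per lesion, so that missed lesions and spurious predictions each contribute a score of zero. The Python sketch below is a minimal illustration of this idea under simplifying assumptions; it is not the official BraTS evaluation code, and the component-matching rule and helper names (dice, lesion_wise_dice) are illustrative only.

    # Minimal sketch of a lesion-wise Dice computation (illustrative, not the
    # official BraTS evaluation implementation).
    import numpy as np
    from scipy import ndimage

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Standard voxel-wise Dice between two boolean masks."""
        inter = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        return 2.0 * inter / denom if denom > 0 else 1.0

    def lesion_wise_dice(gt: np.ndarray, pred: np.ndarray) -> float:
        """Average per-lesion Dice, assigning 0 to missed lesions (false
        negatives) and to unmatched predicted lesions (false positives)."""
        gt_labels, n_gt = ndimage.label(gt.astype(bool))
        pred_labels, n_pred = ndimage.label(pred.astype(bool))

        scores = []
        matched_preds = set()
        for i in range(1, n_gt + 1):
            gt_lesion = gt_labels == i
            # Predicted components that overlap this ground-truth lesion.
            overlapping = set(np.unique(pred_labels[gt_lesion])) - {0}
            if not overlapping:
                scores.append(0.0)  # missed lesion
                continue
            pred_union = np.isin(pred_labels, list(overlapping))
            scores.append(dice(gt_lesion, pred_union))
            matched_preds |= overlapping

        # Penalize predicted lesions that match no ground-truth lesion.
        scores.extend([0.0] * (n_pred - len(matched_preds)))
        return float(np.mean(scores)) if scores else 1.0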
@article{labella2025_2405.09787,
  title={Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge},
  author={Dominic LaBella and Ujjwal Baid and Omaditya Khanna and Shan McBurney-Lin and Ryan McLean and Pierre Nedelec and Arif Rashid and Nourel Hoda Tahon and Talissa Altes and Radhika Bhalerao and Yaseen Dhemesh and Devon Godfrey and Fathi Hilal and Scott Floyd and Anastasia Janas and Anahita Fathi Kazerooni and John Kirkpatrick and Collin Kent and Florian Kofler and Kevin Leu and Nazanin Maleki and Bjoern Menze and Maxence Pajot and Zachary J. Reitman and Jeffrey D. Rudie and Rachit Saluja and Yury Velichko and Chunhao Wang and Pranav Warman and Maruf Adewole and Jake Albrecht and Udunna Anazodo and Syed Muhammad Anwar and Timothy Bergquist and Sully Francis Chen and Verena Chung and Rong Chai and Gian-Marco Conte and Farouk Dako and James Eddy and Ivan Ezhov and Nastaran Khalili and Juan Eugenio Iglesias and Zhifan Jiang and Elaine Johanson and Koen Van Leemput and Hongwei Bran Li and Marius George Linguraru and Xinyang Liu and Aria Mahtabfar and Zeke Meier and Ahmed W. Moawad and John Mongan and Marie Piraud and Russell Takeshi Shinohara and Walter F. Wiggins and Aly H. Abayazeed and Rachel Akinola and András Jakab and Michel Bilello and Maria Correia de Verdier and Priscila Crivellaro and Christos Davatzikos and Keyvan Farahani and John Freymann and Christopher Hess and Raymond Huang and Philipp Lohmann and Mana Moassefi and Matthew W. Pease and Phillipp Vollmuth and Nico Sollmann and David Diffley and Khanak K. Nandolia and Daniel I. Warren and Ali Hussain and Pascal Fehringer and Yulia Bronstein and Lisa Deptula and Evan G. Stein and Mahsa Taherzadeh and Eduardo Portela de Oliveira and Aoife Haughey and Marinos Kontzialis and Luca Saba and Benjamin Turner and Melanie M. T. Brüßeler and Shehbaz Ansari and Athanasios Gkampenis and David Maximilian Weiss and Aya Mansour and Islam H. Shawali and Nikolay Yordanov and Joel M. Stein and Roula Hourani and Mohammed Yahya Moshebah and Ahmed Magdy Abouelatta and Tanvir Rizvi and Klara Willms and Dann C. Martin},
  journal={arXiv preprint arXiv:2405.09787},
  year={2025}
}