Performance Analysis of Deep Learning Models for Femur Segmentation in MRI Scan

Convolutional neural networks such as U-Net excel at medical image segmentation, while attention mechanisms and Kolmogorov–Arnold Networks (KANs) enhance feature extraction. Meta's SAM 2 uses Vision Transformers for prompt-based segmentation without fine-tuning. However, biases in these models impact generalization when data are limited. In this study, we systematically evaluate and compare the performance of three CNN-based models (U-Net, Attention U-Net, and U-KAN) and one transformer-based model (SAM 2) for segmenting femur bone structures in MRI scans. The dataset comprises 11,164 MRI scans with detailed annotations of femoral regions. Performance is assessed using the Dice Similarity Coefficient, which ranges from 0.932 to 0.954 across models. Attention U-Net achieves the highest overall scores, while U-KAN demonstrates superior performance in anatomical regions with a smaller region of interest, leveraging its enhanced learning capacity to improve segmentation accuracy.
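For reference, the Dice Similarity Coefficient used for evaluation is defined as 2|A∩B| / (|A| + |B|) for a predicted mask A and ground-truth mask B. A minimal NumPy sketch for binary masks (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient for binary masks: 2|A∩B| / (|A| + |B|).

    `eps` avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction yields a score of 1.0 and a fully disjoint one yields approximately 0.0, so the reported 0.932–0.954 range indicates close agreement with the annotations.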
@article{liu2025_2504.04066,
  title={Performance Analysis of Deep Learning Models for Femur Segmentation in MRI Scan},
  author={Mengyuan Liu and Yixiao Chen and Anning Tian and Xinmeng Wu and Mozhi Shen and Tianchou Gong and Jeongkyu Lee},
  journal={arXiv preprint arXiv:2504.04066},
  year={2025}
}