This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
YOLOv11n-Based Deep Learning Approach for Detecting Fractures in Pediatric X-Rays
Citations: 0
Authors: 2
Year: 2025
Abstract
Fracture detection in pediatric wrist radiographs is challenging due to incomplete skeletal ossification, small bone structures, and subtle hairline fractures that are frequently missed in clinical practice, while growth plate radiolucency often mimics fracture appearance. This study evaluates YOLOv11n, a lightweight deep learning architecture with Spatial Pyramid Feature Fusion (SPFF) modules optimized for small-object detection, for automated pediatric wrist fracture identification. The model was trained and validated on the GRAZPEDWRI-DX benchmark dataset comprising 20,327 pediatric wrist radiographs (14,269 training, 4,048 validation, 2,010 test images) using transfer learning and conservative augmentation strategies. YOLOv11n achieved mAP@50 of 0.936 on validation and 0.945 on test sets, with precision of 0.905–0.918 and recall of 0.869–0.871, demonstrating improved accuracy compared to previous YOLOv8 implementations (mAP@50 ≈ 0.92) with 40–60% faster inference. End-to-end processing averaged 3.8 ms per image on NVIDIA Tesla T4 hardware, supporting real-time clinical applications. The mAP@50-95 of approximately 0.56 indicates reduced localization accuracy under stricter IoU criteria, primarily for hairline fractures. Error analysis revealed that 62% of false negatives were non-displaced hairline fractures, while 58% of false positives occurred near growth plate regions. YOLOv11n provides favorable balance between diagnostic accuracy and computational efficiency for pediatric fracture detection. However, prospective multi-institutional validation, integration of multi-view fusion strategies, and incorporation of age-specific anatomical priors are necessary before clinical deployment to enhance detection of subtle fracture presentations and reduce growth plate misclassifications.
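The gap the abstract reports between mAP@50 (≈0.94) and mAP@50-95 (≈0.56) comes from how a predicted box is matched to ground truth: a detection counts as correct only if its intersection-over-union (IoU) with the annotated box meets the threshold, and small, elongated hairline-fracture boxes fail stricter thresholds even when offset by a few pixels. A minimal sketch with hypothetical box coordinates (illustrative values, not from the paper) shows the effect:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical ground-truth box for a thin hairline fracture:
# 40 x 10 px, small and elongated (illustrative values only).
gt = (100, 100, 140, 110)
# Prediction shifted by just 3 px horizontally and 2 px vertically,
# a typical localization error for small objects.
pred = (103, 102, 143, 112)

score = iou(gt, pred)  # ~0.59: a hit at IoU 0.50, a miss at 0.60 and above
matched = {t / 100: score >= t / 100 for t in range(50, 100, 5)}
print(f"IoU = {score:.2f}")
print(matched)
```

A few pixels of offset cost roughly 40% of the IoU for this box shape, so the same detection is scored as a true positive at the 0.50 threshold but as a miss across most of the 0.50-0.95 range that mAP@50-95 averages over, which is consistent with the reduced localization accuracy the study attributes to hairline fractures.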
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations