This is an overview page with metadata for this scientific work. The full article is available from the publisher.
S2529 Harnessing Deep Learning for Precision in Liver Tumor Segmentation: A Meta-Analysis of Performance Across Benchmark CT Datasets
Citations: 0
Authors: 6
Year: 2025
Abstract
Introduction: Deep learning has emerged as a transformative force in medical imaging. For liver tumors, a significant burden in hepatology, accurate segmentation on CT imaging is critical. Manual segmentation is time-intensive and error-prone; artificial intelligence (AI) offers a promising alternative.

Methods: A comprehensive search of PubMed, the Cochrane Library, Embase, and Web of Science was conducted for studies published between May 2017 and April 2024, following PRISMA guidelines. Study quality was assessed with the CLAIM and QUADAS-2 tools to evaluate applicability and risk of bias. Models selected for meta-analysis were trained and validated on either the MICCAI LiTS 2017 or the 3DIRCADb dataset, and their reported Dice similarity coefficients (DSC) were pooled. Factors affecting algorithm performance were investigated, and a Wilcoxon signed-rank test was used to compare the DSC distributions between the two datasets.

Results: Of 224 identified studies, 41 were included in the meta-analysis, with 31 models evaluated on the LiTS 2017 dataset and 25 on 3DIRCADb. The top-performing algorithms achieved DSCs of 0.846 (SD ≈ 0.078, IQR ≈ 0.103) on LiTS 2017 and 0.827 (SD ≈ 0.071, IQR ≈ 0.070) on 3DIRCADb. The Wilcoxon signed-rank test yielded a P-value of 0.064. Among models tested on both datasets, SADSNet and MAPFUNet performed best. DenseUNet and Spatial UNet outperformed Cascaded UNet and ResUNet, while DCNN and FCN variants surpassed traditional CNNs. Specialized models such as CEDRNN and GCN showed strong results but were evaluated only on LiTS 2017.

Conclusion: AI-driven segmentation demonstrates high accuracy and reliability, suggesting its readiness for clinical integration in hepatology workflows. With further development and validation, such tools could reduce inter-observer variability, accelerate diagnostic timelines, and support personalized treatment planning in liver cancer management.
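The two quantities at the heart of the Methods section can be illustrated briefly: the Dice similarity coefficient (DSC) measures voxel-wise overlap between a predicted and a reference segmentation mask, and the Wilcoxon signed-rank test compares paired score distributions without assuming normality. The sketch below is a minimal illustration, not the authors' analysis pipeline; the per-model DSC values are hypothetical placeholders, and the study's actual data would come from the 41 included papers.

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Hypothetical paired DSC scores for models evaluated on both benchmarks
# (illustrative values only, not taken from the meta-analysis).
lits_2017 = np.array([0.84, 0.80, 0.78, 0.83, 0.81, 0.79, 0.85, 0.77])
ircadb    = np.array([0.82, 0.79, 0.75, 0.80, 0.83, 0.76, 0.81, 0.74])

# Paired, non-parametric comparison of the two score distributions.
stat, p_value = wilcoxon(lits_2017, ircadb)
print(f"Wilcoxon statistic = {stat:.1f}, P = {p_value:.3f}")
```

A paired test is the appropriate choice here because each model contributes one score per dataset, so differences are computed within models rather than across independent samples.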
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations