This is an overview page with metadata for this scholarly article. The full text is available from the publisher.
Transforming healthcare with AI: a systematic review of predictive modeling for early disease detection and management
1 citation · 3 authors · 2025
Abstract
Purpose This study aims to systematically review artificial intelligence (AI)-driven predictive modeling in healthcare, focusing on recent advances in early disease detection and management. It also identifies research gaps, validation methods, performance metrics, stakeholder insights and implementation challenges.

Design/methodology/approach A structured search of academic databases identified peer-reviewed studies published between 2015 and 2024; after screening for relevance and methodological rigor, 24 studies were selected for in-depth analysis. Asia accounted for the largest share with 13 studies (54.2%), reflecting the region's growing investment in digital health infrastructure and AI research, particularly in countries such as China, India and Pakistan. Europe followed with 5 studies (20.8%), indicating strong academic and institutional interest in AI-enabled biomedical research in countries such as Greece, the UK and Germany. North America contributed 6 studies (25%), primarily from the USA, which continues to play a leading role in AI innovation through academic, clinical and industry-led initiatives.

Findings Most studies used AI techniques such as convolutional neural networks, support vector machines and hybrid models, achieving high diagnostic performance. Reported accuracies ranged from 88% to 98%, with AUC-ROC scores exceeding 0.90 in 72% of the studies. Precision and recall values commonly surpassed 0.85, reflecting robust predictive capability. However, only 33% of the studies reported external validation, and fewer than 25% addressed model interpretability. Stakeholder feedback emphasized usability, data security and the need for transparent algorithmic decision-making. Despite technical success, barriers such as workflow integration, ethical concerns and lack of training persist.
Originality/value This systematic literature review provides a structured and evidence-based synthesis of recent advancements and existing limitations in AI-driven predictive modeling within healthcare. By systematically analyzing 24 peer-reviewed studies, the review quantifies performance metrics, validation practices and stakeholder considerations across diverse geographical contexts. It uniquely contributes by bridging technical evaluation with practical implementation insights, underscoring the urgent need for standardized validation frameworks, transparent and interpretable models and interdisciplinary collaboration. The review offers actionable guidance for researchers, clinicians and policymakers seeking to responsibly integrate AI into clinical workflows and enhance patient care outcomes through data-driven decision-making.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations