This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A Novel Machine Learning-Optimized Framework for Systematic Analysis of Foundation Models in Healthcare: Comprehensive Algorithm Optimization With Governance-Driven Predictive Modeling
0
Citations
2
Authors
2025
Year
Abstract
This paper presents an innovative computational framework that combines systematic literature review methodology with machine learning techniques to analyze the deployment of Foundation Models (FMs) in healthcare systems. We developed a novel approach integrating PRISMA 2020 guidelines with Latent Dirichlet Allocation (LDA) topic modeling, analyzing 92 peer-reviewed studies (2021-2025). Through systematic benchmarking of eight optimization algorithms, Particle Swarm Optimization (PSO) emerged as optimal for LDA hyperparameter tuning, achieving superior topic coherence (fitness: -0.89546 on a negative coherence scale, where lower values indicate better model performance). Our analysis revealed two dominant implementation paradigms: clinical practice applications (44.57%) and image-based diagnostic systems (55.43%). In the experimental extension, we developed a hybrid classification framework that captures governance-related factors influencing FM adoption using both binary and multilabel approaches. Binary classification achieved AUC=0.956 with Logistic Regression, while multilabel classification (10 thematic clusters) using Gradient Boosting achieved a Hamming loss of 0.071, revealing that 71% of papers exhibit multi-domain characteristics, with an overall average of 3.12 thematic cluster assignments per document across the entire 92-paper corpus. LIME-based interpretability revealed distinct regulatory patterns across application domains. Notably, while safety and bias concerns appear in >70% of studies, critical dimensions such as accountability (8.7%) and patient-centered design (12.0%) remain underrepresented. The framework demonstrates robust performance across multiple independent runs, providing a replicable methodology for analyzing emerging AI technologies. All code and annotation guidelines are available upon request, supporting reproducibility and extension of this interdisciplinary approach.
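The multilabel evaluation metrics cited in the abstract (Hamming loss of 0.071 and an average of 3.12 cluster assignments per document) can be illustrated with a minimal sketch. The toy label matrices below are purely hypothetical, not the paper's data; they only show how these two quantities are computed for a corpus with binary thematic-cluster assignments.

```python
# Minimal sketch of the multilabel metrics from the abstract, computed on
# a toy corpus of 4 documents x 5 thematic clusters (illustrative data).

def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly, averaged over
    all documents and all thematic clusters."""
    n_docs, n_labels = len(y_true), len(y_true[0])
    errors = sum(
        t != p
        for row_t, row_p in zip(y_true, y_pred)
        for t, p in zip(row_t, row_p)
    )
    return errors / (n_docs * n_labels)

def avg_labels_per_doc(y):
    """Mean number of thematic clusters assigned per document."""
    return sum(sum(row) for row in y) / len(y)

# Toy gold and predicted cluster memberships (0/1 per cluster).
y_true = [
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
y_pred = [
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],  # one cluster missed
    [1, 1, 0, 1, 1],  # one spurious cluster
    [0, 0, 1, 0, 0],
]

print(hamming_loss(y_true, y_pred))  # 2 wrong slots / 20 slots = 0.1
print(avg_labels_per_doc(y_true))    # 9 assignments / 4 docs = 2.25
```

In the paper's setting the same computation would run over the 92-document corpus and 10 clusters; a Hamming loss of 0.071 means roughly 7% of all document-cluster slots were misclassified.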
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations