This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
AI-Driven Diagnostic Modelling Frameworks for Enhancing Accuracy and Privacy Protection in U.S. Healthcare Analytics Systems
Citations: 0
Authors: 2
Year: 2026
Abstract
This study examined AI-driven diagnostic modeling frameworks for enhancing accuracy and privacy protection in U.S. healthcare analytics systems using a framework-level quantitative approach that integrated predictive performance, calibration reliability, and privacy-risk evaluation. A structured evidence-mapping process reviewed 72 peer-reviewed papers to define constructs, metrics, and privacy-threat considerations, after which a retrospective multi-site design was implemented to compare centralized non-private modeling, differentially private training, federated learning, and hybrid privacy configurations under standardized cohort rules and leakage-resistant validation. The analytic dataset included 48,620 adult patients contributing 162,904 encounters across three health systems, with a mean age of 57.8 years (SD = 16.4) and 52.6% female representation. Median encounter density was 3.0 encounters per patient (IQR = 2.0–5.0), and 18.9% of patients were classified as low-contact (≤1 encounter during the lookback window). Data completeness varied by domain, with missingness of 6.8% for vital signs, 18.4% for core laboratory tests, and 9.6% for medication indicators. Overall diagnostic outcome prevalence was 8.6%, ranging from 7.5% to 9.6% across sites. Correlation analysis indicated a strong relationship between encounter density and measurement frequency (r = 0.58) and a moderate association between comorbidity burden and outcome occurrence (r = 0.34). Collinearity diagnostics showed elevated redundancy among utilization predictors, including variance inflation factors of 6.8 for total encounters and 5.9 for inpatient admissions, supporting consolidation into composite measures before regression.
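The collinearity diagnostic described above can be sketched as follows. This is a minimal illustration of the variance inflation factor (VIF) computation, not the study's actual pipeline: the variable names (`total_enc`, `inpatient`, `age`) and the simulated data are hypothetical, chosen only so that two utilization predictors are correlated by construction.

```python
# Hypothetical VIF sketch: regress each predictor on the others and
# compute VIF_j = 1 / (1 - R_j^2). High values flag redundant predictors
# that the abstract consolidates into composites before regression.
import numpy as np

def vif(X: np.ndarray) -> list[float]:
    """VIF for each column of design matrix X (rows = observations)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])      # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # OLS fit
        resid = y - A @ beta
        r2 = 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
total_enc = rng.poisson(4.0, 500).astype(float)        # utilization predictor
inpatient = 0.6 * total_enc + rng.normal(0, 1.0, 500)  # correlated by design
age = rng.normal(57.8, 16.4, 500)                      # roughly independent
X = np.column_stack([total_enc, inpatient, age])
print([round(v, 2) for v in vif(X)])  # first two elevated, age near 1
```

A common rule of thumb treats VIF above 5 (as reported for total encounters, 6.8, and inpatient admissions, 5.9) as evidence of problematic redundancy.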
Multivariable regression showed that differentially private training was associated with a modest reduction in discrimination (ΔAUC = −0.012) and increased calibration error (+0.021), while federated learning showed minimal average discrimination change (ΔAUC = −0.004) but greater cross-site dispersion. Privacy-risk evaluation indicated reduced membership inference leakage under privacy-preserving configurations, with leakage reductions of −0.083 for differentially private training and −0.071 for hybrid training relative to baseline. Overall, accuracy and privacy outcomes co-varied as system-level properties shaped by data quality, institutional heterogeneity, and framework design choices.
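The membership-inference leakage comparison can be illustrated with a minimal sketch. Everything here is assumed: the attack is a simple loss-threshold attack, leakage is measured as the attack AUC above chance (0.5), and the member/non-member loss distributions are simulated rather than taken from the study (the simulated leakage reductions will not match the reported −0.083 and −0.071).

```python
# Hypothetical membership-inference sketch: an attacker scores each record
# by negative loss (lower loss suggests a training member) and leakage is
# the attack AUC minus 0.5. Loss distributions are simulated, not real.
import numpy as np

def attack_auc(member_scores, nonmember_scores):
    """Mann-Whitney AUC: P(member score > non-member score)."""
    m = np.asarray(member_scores)[:, None]
    n = np.asarray(nonmember_scores)[None, :]
    return float((m > n).mean() + 0.5 * (m == n).mean())

rng = np.random.default_rng(1)
# Baseline model: members have visibly lower loss (memorization signal).
base_leak = attack_auc(-rng.normal(0.40, 0.3, 2000),
                       -rng.normal(0.90, 0.3, 2000)) - 0.5
# Privacy-preserving model: member/non-member losses nearly overlap.
dp_leak = attack_auc(-rng.normal(0.80, 0.3, 2000),
                     -rng.normal(0.85, 0.3, 2000)) - 0.5
print(round(base_leak, 3), round(dp_leak, 3), round(dp_leak - base_leak, 3))
```

Under this framing, the negative leakage deltas reported in the abstract correspond to the attack performing closer to chance against the privacy-preserving configurations than against the non-private baseline.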
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations