OpenAlex · Updated hourly · Last updated: 13.03.2026, 11:57

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI-Driven Diagnostic Modelling Frameworks for Enhancing Accuracy and Privacy Protection in U.S. Healthcare Analytics Systems

2026 · 0 citations · American Journal of Scholarly Research and Innovation · Open Access

Citations: 0 · Authors: 2 · Year: 2026

Abstract

This study examined AI-Driven Diagnostic Modelling Frameworks for Enhancing Accuracy and Privacy Protection in U.S. Healthcare Analytics Systems using a framework-level quantitative approach that integrated predictive performance, calibration reliability, and privacy-risk evaluation. A structured evidence-mapping process reviewed 72 peer-reviewed papers to define constructs, metrics, and privacy-threat considerations, after which a retrospective multi-site design was implemented to compare centralized non-private modelling, differentially private training, federated learning, and hybrid privacy configurations under standardized cohort rules and leakage-resistant validation. The analytic dataset included 48,620 adult patients contributing 162,904 encounters across three health systems, with a mean age of 57.8 years (SD = 16.4) and 52.6% female representation. Median encounter density was 3.0 encounters per patient (IQR = 2.0–5.0), and 18.9% of patients were classified as low-contact (≤1 encounter during the lookback window). Data completeness varied by domain, with missingness of 6.8% for vital signs, 18.4% for core laboratory results, and 9.6% for medication indicators. Overall diagnostic outcome prevalence was 8.6%, ranging from 7.5% to 9.6% across sites. Correlation analysis indicated a strong relationship between encounter density and measurement frequency (r = 0.58) and a moderate association between comorbidity burden and outcome occurrence (r = 0.34). Collinearity diagnostics showed elevated redundancy among utilization predictors, including a variance inflation factor of 6.8 for total encounters and 5.9 for inpatient admissions, supporting composite consolidation before regression.
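The collinearity screening described above can be sketched with the standard variance-inflation-factor definition, VIF_j = 1 / (1 − R_j²), where R_j² comes from regressing predictor j on the remaining predictors. The code below is a minimal illustration on synthetic utilization predictors (the variable names and data are invented for the example, not drawn from the study's cohort):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 is from an OLS regression of
    column j on the remaining columns (with an intercept).
    """
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

# Illustrative correlated utilization predictors (synthetic data only)
rng = np.random.default_rng(0)
inpatient = rng.poisson(2.0, size=500).astype(float)
outpatient = rng.poisson(3.0, size=500).astype(float)
total = inpatient + outpatient + rng.normal(0, 0.5, size=500)
X = np.column_stack([inpatient, outpatient, total])
print(np.round(vif(X), 1))  # the near-redundant total shows an inflated VIF
```

A VIF well above ~5, as the study reports for total encounters (6.8), is a common trigger for consolidating redundant predictors into a composite before regression.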
Multivariable regression showed that differentially private training was associated with a modest reduction in discrimination (ΔAUC = −0.012) and increased calibration error (+0.021), while federated learning showed minimal average discrimination change (ΔAUC = −0.004) but greater cross-site dispersion. Privacy-risk evaluation indicated reduced membership inference leakage under privacy-preserving configurations, with leakage reductions of −0.083 for differentially private training and −0.071 for hybrid training relative to baseline. Overall, accuracy and privacy outcomes co-varied as system-level properties shaped by data quality, institutional heterogeneity, and framework design choices.
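The membership-inference leakage comparison can be illustrated with a simple loss-threshold attack: if a model overfits, its training members tend to have lower loss than non-members, and the attack's AUC advantage over the 0.5 chance baseline quantifies leakage. The sketch below uses synthetic loss distributions (the gamma parameters and the "baseline vs. private" framing are illustrative assumptions, not the study's attack protocol):

```python
import numpy as np

def membership_leakage(member_losses, nonmember_losses):
    """Leakage as the advantage of a loss-threshold membership attack:
    the AUC of the rule "lower loss => member", minus the 0.5 baseline.
    AUC is computed rank-wise as P(member loss < non-member loss)."""
    m = np.asarray(member_losses)
    n = np.asarray(nonmember_losses)
    wins = (m[:, None] < n[None, :]).mean()
    ties = (m[:, None] == n[None, :]).mean()
    return wins + 0.5 * ties - 0.5

rng = np.random.default_rng(2)
# Overfit baseline model: members get systematically lower loss
baseline = membership_leakage(rng.gamma(2.0, 0.3, 2000),
                              rng.gamma(2.0, 0.5, 2000))
# Privacy-preserving model: member/non-member losses nearly indistinguishable
private = membership_leakage(rng.gamma(2.0, 0.48, 2000),
                             rng.gamma(2.0, 0.5, 2000))
print(f"baseline leakage {baseline:.3f} vs private {private:.3f}")
```

A drop in this advantage under differentially private or hybrid training corresponds to the leakage reductions (−0.083 and −0.071) the abstract reports relative to the non-private baseline.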


Topics

Artificial Intelligence in Healthcare and Education · Privacy-Preserving Technologies in Data · Ethics and Social Impacts of AI