This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Advancing Explainable and Secure Machine Learning for Decision Support in U.S. Regulated Systems
Citations: 0
Authors: 2
Year: 2023
Abstract
Machine learning systems are increasingly deployed in U.S. regulated decision-support environments where predictive outputs must be accurate, interpretable, secure, privacy-preserving, and auditable. This study advanced an integrated quantitative assurance framework for evaluating explainable and secure machine learning in regulated contexts. Guided by a structured review of 118 peer-reviewed studies, a multi-phase experimental design was implemented incorporating predictive benchmarking, explanation fidelity and stability testing, adversarial robustness assessment, privacy inference evaluation, and human-in-the-loop experimentation. Baseline predictive accuracy across model families averaged 0.84 (SD = 0.04), with ensemble models reaching 0.86. Calibration error decreased from 0.041 in unconstrained models to 0.028 in constrained configurations. Under adversarial simulation, baseline models experienced a 14.2 percentage-point degradation in performance, whereas robustness-enhanced models limited degradation to 6.5 percentage points. Privacy controls reduced membership inference attack success rates from 0.64 (SD = 0.05) to 0.52 (SD = 0.04), at the cost of a modest 1.7% reduction in discrimination performance. Explanation fidelity reached 0.92 (SD = 0.02) for intrinsically interpretable models compared to 0.85 (SD = 0.04) for post hoc methods, and explanation stability variance decreased from 0.08 to 0.03 under enhanced configurations. In human-in-the-loop evaluation (N = 312; 6,240 trials), structured explanations increased decision accuracy from 0.72 (SD = 0.09) to 0.79 (SD = 0.07), improved confidence calibration from 0.74 to 0.82, and increased selective override behavior from 18.4% to 24.7%, while response time rose from 34.1 to 40.8 seconds. Regression models explained 34% of variance in decision accuracy and 38% in confidence calibration. Findings demonstrated that integrated evaluation of explainability, robustness, and privacy produced measurable improvements in predictive validity, interpretability reliability, security resilience, and human decision performance within regulated systems.
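Since the full methodology is only available from the publisher, the following is a minimal sketch of how a calibration-error figure such as the reported drop from 0.041 to 0.028 is commonly computed, using the standard expected calibration error (ECE); the paper's exact metric definition is not stated in the abstract, so the binning scheme and bin count here are assumptions for illustration only.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: |accuracy - confidence| per bin, weighted by bin
    occupancy. The paper may use a different formulation; this is an
    illustrative sketch, not the authors' implementation."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        # Gap between mean confidence and empirical accuracy in this bin,
        # weighted by the fraction of samples that fall into the bin.
        ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Synthetic example: predictions whose accuracy tracks their confidence
# are well calibrated and yield a low ECE.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=5000)
hit = (rng.uniform(size=5000) < conf).astype(float)
print(round(expected_calibration_error(conf, hit), 3))
```

A lower ECE means the model's stated confidence matches its observed accuracy more closely, which is the sense in which the constrained configurations above (0.028 vs. 0.041) are better calibrated.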
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations