This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Benchmarking Explainable AI Methods for Tabular Healthcare Data: Towards Standardized and Clinically Interpretable Evaluation Frameworks
Citations: 0
Authors: 2
Year: 2025
Abstract
Artificial Intelligence (AI) is becoming more common in healthcare, but clinical confidence in AI is limited by black-box models. This study evaluates Explainable AI (XAI) on the Pima Indians Diabetes dataset using Logistic Regression and Random Forest, coupled with SHAP, LIME, and Permutation Feature Importance (PFI). Random Forest achieved an accuracy of 84 percent and an ROC-AUC of 0.80, with glucose, BMI, and age consistently emerging as important predictors. SHAP identified global patterns, LIME explained individual predictions, and PFI confirmed the importance rankings. In contrast to previous studies that evaluated explainers in isolation, this study offers a framework that combines methods to improve both accuracy and interpretability, supporting more transparent and clinically useful AI systems.
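The PFI step described in the abstract can be sketched with scikit-learn's `permutation_importance`. This is a minimal illustration, not the authors' code: the data below is a synthetic stand-in for the Pima Indians Diabetes features (only `glucose`, `bmi`, `age`, and a noise column are simulated), and the model settings are assumptions. SHAP and LIME explanations would be computed analogously with the `shap` and `lime` libraries on the same fitted model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for the Pima Indians Diabetes data:
# the label depends on glucose, bmi, and age, but not on the noise column.
n = 500
glucose = rng.normal(120, 30, n)
bmi = rng.normal(32, 6, n)
age = rng.integers(21, 70, n).astype(float)
noise = rng.normal(0, 1, n)
X = np.column_stack([glucose, bmi, age, noise])
y = (glucose + 2 * bmi + 0.5 * age + rng.normal(0, 20, n) > 220).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation Feature Importance: the drop in test accuracy when
# each feature column is shuffled, averaged over repeats.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["glucose", "bmi", "age", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

On this synthetic data, `glucose` receives the largest importance and the noise column is near zero, mirroring the ranking pattern the abstract reports for the real dataset.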
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?"
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations