This is an overview page with metadata for this scientific article. The full article is available from the publisher.
A Methodology for Reliability Analysis of Explainable Machine Learning: Application to Endocrinology Diseases
Citations: 4
Authors: 4
Year: 2024
Abstract
Machine learning (ML) has transformed various sectors, including healthcare, by enabling the extraction of complex knowledge and predictions from vast datasets. However, the opacity of ML models, often referred to as “black boxes,” hinders their integration into medical practice. Explainable AI (XAI) has emerged as a crucial means of enhancing the transparency and understandability of ML model decisions, particularly in healthcare, where reliability and accuracy are paramount. Yet the reliability of the explanations themselves remains a major challenge, chiefly because their validity and relevance are difficult to maintain on new training and test data. In this study, we propose a structured approach to enhance and evaluate the reliability of explanations provided by ML models in healthcare. We improve the reliability of explainability by combining XAI approaches with the k-fold cross-validation technique, and we develop several metrics to assess the generalizability, concordance, and stability of the combined XAI and k-fold approach, which we apply to case studies on hypothyroidism and diabetes risk prediction using the SHAP and LIME frameworks. Our findings reveal that SHAP combined with k-fold exhibits superior generalizability, stability, and concordance compared to LIME combined with k-fold: the SHAP and k-fold integration yields reliable explanations for hypothyroidism and diabetes predictions, showing strong concordance with the internal explainability of the random forest model, the best generalizability, and good stability. This structured approach can bolster practitioners’ confidence in ML models and facilitate their adoption in healthcare settings.
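The core idea sketched in the abstract, training an explainable model across k folds and checking how stable its explanations are from fold to fold, can be illustrated in a few lines. The sketch below is not the authors’ code: it uses a synthetic dataset, the random forest’s built-in feature importances as a stand-in for the internal explainability mentioned in the abstract (rather than SHAP or LIME), and mean pairwise Spearman rank correlation as a hypothetical stability metric; the paper’s actual metrics are defined in the full article.

```python
# Illustrative sketch (not the paper's implementation): measure how stable
# a random forest's feature-importance explanations are across k folds.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

# Synthetic stand-in for a clinical dataset (e.g., hypothyroidism features).
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Train one model per fold and collect its internal feature importances.
importances = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    importances.append(model.feature_importances_)

# Hypothetical stability score: mean pairwise Spearman correlation of the
# feature-importance rankings across folds (1.0 = identical rankings).
corrs = [spearmanr(importances[i], importances[j])[0]
         for i in range(5) for j in range(i + 1, 5)]
stability = float(np.mean(corrs))
print(f"mean pairwise rank correlation across folds: {stability:.2f}")
```

The same loop structure would apply with SHAP or LIME attributions in place of `feature_importances_`; one would then compare those attributions against the forest’s internal importances to probe the concordance the abstract reports.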
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,284 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,233 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,179 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,096 citations