This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Guaranteeing Correctness in Black-Box Machine Learning: A Fusion of Explainable AI and Formal Methods for Healthcare Decision-Making
38 citations · 6 authors · 2024
Abstract
In recent years, Explainable Artificial Intelligence (XAI) has attracted considerable attention from the research community, primarily for elucidating the opaque decision-making processes of complex black-box machine learning systems such as deep neural networks. This surge in interest stems from the widespread adoption of black-box models in critical domains such as healthcare and fraud detection, which makes it pressing to understand and rigorously validate their decision-making mechanisms. However, prominent XAI techniques, including LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations), rely on heuristics and cannot guarantee the correctness of the explanations they provide. This article systematically addresses this critical issue in machine learning and deep learning models, underscoring XAI's pivotal role in promoting model transparency to improve decision-making quality. Furthermore, this study advocates integrating Formal Methods to provide correctness guarantees for black-box internal decision-making. The proposed methodology unfolds in three stages: first, training black-box neural-network models and using them to generate synthetic datasets; second, employing LIME and SHAP to interpret the models and visualize their internal decision-making; and third, training decision trees on the synthetic datasets so that Formal Methods can be applied to verify the correctness of the black-box model's decisions. To validate the proposed approach, experiments were conducted on four widely recognized medical datasets, including the Wisconsin Breast Cancer and Thyroid Cancer (TC) datasets, which are available in the UCI Machine Learning Repository.
Specifically, this research represents a significant contribution by pioneering a novel approach that seamlessly integrates XAI and Formal Methods, thereby furnishing correctness guarantees for internal decision-making processes within the healthcare domain.
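The surrogate-extraction idea at the core of the abstract's pipeline can be illustrated with a minimal, self-contained sketch. Everything here is a hypothetical stand-in, not the paper's implementation: a hidden linear rule plays the role of the trained neural network, a single-split decision stump plays the role of the decision tree, and the LIME/SHAP interpretation stage is omitted.

```python
# Illustrative sketch of the three-stage pipeline on toy data. All names and
# the linear "black box" are hypothetical stand-ins: the paper trains neural
# networks, applies LIME/SHAP (omitted here), and fits decision trees; this
# sketch uses a hidden linear rule and a one-split stump instead.
import random

random.seed(0)

def black_box(x):
    # Stand-in for a trained neural network: an opaque labelling rule.
    return 1 if 0.7 * x[0] + 0.3 * x[1] > 0.5 else 0

# Stage 1: query the black box to build a synthetic labelled dataset.
synthetic = []
for _ in range(1000):
    x = [random.random(), random.random()]
    synthetic.append((x, black_box(x)))

# Stage 3: fit a transparent surrogate (here, a single-split decision stump
# found by exhaustive search) whose rule is simple enough to check formally.
def best_stump(data):
    best = (0, 0.5, 0.0)  # (feature index, threshold, agreement)
    for feat in (0, 1):
        for thr in [i / 20 for i in range(1, 20)]:
            agree = sum((x[feat] > thr) == bool(y) for x, y in data) / len(data)
            if agree > best[2]:
                best = (feat, thr, agree)
    return best

feat, thr, fidelity = best_stump(synthetic)
# `fidelity` is the fraction of synthetic samples on which the transparent
# rule reproduces the black box's decision.
```

In the paper's setting, the transparent surrogate extracted this way (a decision tree rather than a stump) is the object handed to Formal Methods, and its fidelity to the black box on the synthetic data is what makes the correctness guarantee meaningful.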
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,246 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,228 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,150 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,091 citations