This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
An Empirical Evaluation of AI Deep Explainable Tools
59 citations · 5 authors · 2020
Abstract
Success in machine learning has led to a wealth of Artificial Intelligence (AI) systems. A great deal of attention is currently being paid to the development of advanced Machine Learning (ML)-based solutions for a variety of automated prediction and classification tasks across a wide array of industries. However, such automated applications may introduce bias into results, making it risky to use these ML models in security- and privacy-sensitive domains. Predictions should be accurate, and models have to be interpretable/explainable so that users can understand how they work. In this research, we conduct an empirical evaluation of two major explanation/interpretability methods, LIME and SHAP, on two datasets using deep learning models, including an Artificial Neural Network (ANN) and a Convolutional Neural Network (CNN). The results demonstrate that SHAP performs slightly better than LIME in terms of Identity, Stability, and Separability on the two datasets we used (Breast Cancer Wisconsin (Diagnostic) and NIH Chest X-Ray).
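The kind of evaluation the abstract describes can be illustrated with a short sketch: train a small ANN on the Breast Cancer Wisconsin (Diagnostic) data, then explain a single prediction with both LIME and SHAP. This is a hypothetical minimal example, not the authors' code; the model architecture (scikit-learn's MLPClassifier as a stand-in ANN), the background sample size, and the number of reported features are assumptions, since the page does not specify them.

```python
# Minimal sketch: LIME and SHAP explanations for one ANN prediction.
# Requires: scikit-learn, lime, shap. Architecture/parameters are assumed.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# Stand-in ANN; the paper's exact architecture is not given on this page.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# LIME: fit a local surrogate model around one test instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top feature contributions for this instance

# SHAP: model-agnostic KernelExplainer with a small background sample.
background = X_train[:50]
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(X_test[:1])
print(shap_values)  # per-class attribution values for the same instance
```

Properties such as Identity (identical instances receive identical explanations) and Stability can then be checked by explaining the same or perturbed instances repeatedly and comparing the resulting attribution vectors.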
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?"
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations