This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A Study Comparing Explainability Methods: A Medical User Perspective
Citations: 0 · Authors: 3 · Year: 2025
Abstract
In recent years, we have witnessed the rapid development of artificial intelligence systems and their presence in various fields. These systems are very efficient and powerful, but often opaque and insufficiently transparent. Explainable artificial intelligence (XAI) methods try to solve this problem. XAI is still a developing area of research, but it already has considerable potential for improving the transparency and trustworthiness of AI models. Thanks to XAI, we can build more responsible and ethical AI systems that better serve people's needs. The aim of this study is to focus on the role of the user. Part of the work is a comparison of several explainability methods, such as LIME, SHAP, ANCHORS, and PDP, on a selected data set from the field of medicine. The individual explainability methods were compared from various aspects by means of a user study.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,374 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?"
2016 · 14,261 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,126 citations