This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Reevaluating feature importance in machine learning: concerns regarding SHAP interpretations in the context of the EU artificial intelligence act
Citations: 13
Authors: 1
Year: 2025
Abstract
This paper critically examines the analysis conducted by Maußner et al., particularly their interpretation of feature importances derived from various machine learning models using SHAP (SHapley Additive exPlanations). Although SHAP aids interpretability, it is subject to model-specific biases that can misrepresent relationships between variables. The paper emphasizes the lack of ground-truth values in feature importance assessments and calls for careful consideration of statistical methodologies, including robust nonparametric approaches. By advocating the use of Spearman's correlation and Kendall's tau, each reported with p-values, this work aims to strengthen the integrity of findings in machine learning studies, ensuring that the conclusions drawn are reliable and actionable.
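The nonparametric checks the abstract advocates can be sketched in a few lines with SciPy. This is a minimal illustration, not the authors' actual analysis; the data here are synthetic and the variable names are hypothetical:

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

# Hypothetical data: a feature and a model output with a monotonic relationship
rng = np.random.default_rng(0)
feature = rng.normal(size=200)
output = 2.0 * feature + rng.normal(size=200)

# Rank-based correlations with p-values, as advocated in the abstract
rho, p_rho = spearmanr(feature, output)   # monotonic association
tau, p_tau = kendalltau(feature, output)  # concordance-based association

print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3g})")
print(f"Kendall tau  = {tau:.3f} (p = {p_tau:.3g})")
```

Unlike Pearson's correlation, both statistics depend only on ranks, so they are robust to outliers and monotone transformations, and each comes with a significance test rather than a bare importance score.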
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,452 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,258 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,307 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,136 citations