This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Accuracy is not all you need! The Reasons to Require AI Explainability
Citations: 0
Authors: 1
Year: 2026
Abstract
Do we need explanations of AI outputs in order to use AI systems (in high-risk settings)? This question has been actively debated recently, with one group denying that explanations are needed as long as the AI system is sufficiently accurate. What matters, according to them, is that outcomes improve. The other group argues that we have procedural reasons, centered around autonomy and self-advocacy, which trump outcome-based arguments to the contrary. I here present a set of arguments to show that outcome-based arguments should in fact also favor explainability for many current systems, as challenges with human oversight and accountability often lead to worse overall outcomes even when a more accurate AI system is integrated. Critics of explainability have overlooked the fact that AI operates within a broader socio-technical system, and its accuracy alone tells us little about the final outcomes. In addition, I consolidate the procedural arguments and present a view of the upshot of these arguments. On this view, we should avoid applications of AI that largely replace decision-making (relegating humans to the position of checking outputs). We can, however, use AI in other roles even for high-risk decision-making while conforming to all of the requirements set by both outcome-based and procedural arguments. What matters, in the end, is the ability to explain decisions, and with the right role for AI that is possible even when supported by opaque systems.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,284 cit.
Generative Adversarial Nets
2023 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,233 cit.
"Why Should I Trust You?"
2016 · 14,179 cit.
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,096 cit.