This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
In Defense of Post Hoc Explanations in Medical AI
Citations: 0
Authors: 3
Year: 2026
Abstract
Since the early days of the explainable artificial intelligence movement, post hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient-safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post hoc explanations are greatly exaggerated since they merely approximate, rather than replicate, the actual reasoning processes that black box systems take to arrive at their outputs. In this paper, we aim to defend the value of post hoc explanations against this recent critique. We argue that even if post hoc explanations do not replicate the exact reasoning processes of black box systems, they can still improve users' functional understanding of black box systems, increase the accuracy of clinician-AI teams, and assist clinicians in justifying their AI-informed decisions. While post hoc explanations are not a silver-bullet solution to the black box problem in medical AI, they remain a useful strategy for addressing it.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 21,035 cit.
Generative Adversarial Nets
2023 · 19,896 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,378 cit.
"Why Should I Trust You?"
2016 · 14,785 cit.
Generative adversarial networks
2020 · 13,374 cit.