This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching
Citations: 3
Authors: 4
Year: 2021
Abstract
Limited expert time is a key bottleneck in medical imaging. Due to advances in image classification, AI can now serve as decision-support for medical experts, with the potential for great gains in radiologist productivity and, by extension, public health. However, these gains are contingent on building and maintaining experts' trust in the AI agents. Explainable AI may build such trust by helping medical experts to understand the AI decision processes behind diagnostic judgements. Here we introduce and evaluate explanations based on Bayesian Teaching, a formal account of explanation rooted in the cognitive science of human learning. We find that medical experts exposed to explanations generated by Bayesian Teaching successfully predict the AI's diagnostic decisions and are more likely to certify the AI for cases when the AI is correct than when it is wrong, indicating appropriate trust. These results show that Explainable AI can be used to support human-AI collaboration in medical imaging.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,676 citations
Generative Adversarial Nets
2023 · 19,895 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,318 citations
"Why Should I Trust You?"
2016 · 14,522 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,191 citations