OpenAlex · Updated hourly · Last updated: 20.04.2026, 13:22

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching

2021 · 3 citations · arXiv (Cornell University) · Open Access
Open full text at the publisher

Citations: 3
Authors: 4
Year: 2021

Abstract

Limited expert time is a key bottleneck in medical imaging. Due to advances in image classification, AI can now serve as decision-support for medical experts, with the potential for great gains in radiologist productivity and, by extension, public health. However, these gains are contingent on building and maintaining experts' trust in the AI agents. Explainable AI may build such trust by helping medical experts to understand the AI decision processes behind diagnostic judgements. Here we introduce and evaluate explanations based on Bayesian Teaching, a formal account of explanation rooted in the cognitive science of human learning. We find that medical experts exposed to explanations generated by Bayesian Teaching successfully predict the AI's diagnostic decisions and are more likely to certify the AI for cases when the AI is correct than when it is wrong, indicating appropriate trust. These results show that Explainable AI can be used to support human-AI collaboration in medical imaging.


Topics

Explainable Artificial Intelligence (XAI) · Clinical Reasoning and Diagnostic Skills · Artificial Intelligence in Healthcare and Education