This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Outlining the Design Space of Explainable Intelligent Systems for Medical Diagnosis
Citations: 27
Authors: 3
Year: 2019
Abstract
The adoption of intelligent systems creates opportunities as well as challenges for medical work. On the positive side, intelligent systems have the potential to compute complex data from patients and generate automated diagnosis recommendations for doctors. However, medical professionals often perceive such systems as black boxes and, therefore, feel concerned about relying on system generated results to make decisions. In this paper, we contribute to the ongoing discussion of explainable artificial intelligence (XAI) by exploring the concept of explanation from a human-centered perspective. We hypothesize that medical professionals would perceive a system as explainable if the system was designed to think and act like doctors. We report a preliminary interview study that collected six medical professionals' reflection of how they interact with data for diagnosis and treatment purposes. Our data reveals when and how doctors prioritize among various types of data as a central part of their diagnosis process. Based on these findings, we outline future directions regarding the design of XAI systems in the medical context.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,988 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,368 citations
"Why Should I Trust You?"
2016 · 14,740 citations
Generative adversarial networks
2020 · 13,342 citations