This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Bringing Machine Learning Systems into Clinical Practice: A Design Science Approach to Explainable Machine Learning-Based Clinical Decision Support Systems
Citations: 21 · Authors: 4 · Year: 2023
Abstract
Clinical decision support systems (CDSSs) based on machine learning (ML) hold great promise for improving medical care. Technically, such CDSSs are already feasible, but physicians have been skeptical about their application. In particular, their opacity is a major concern, as it may lead physicians to overlook erroneous outputs from ML-based CDSSs, potentially causing serious consequences for patients. Research on explainable AI (XAI) offers methods with the potential to increase the explainability of black-box ML systems. This could significantly accelerate the application of ML-based CDSSs in medicine. However, XAI research to date has mainly been technically driven and largely neglects the needs of end users. To better engage the users of ML-based CDSSs, we applied a design science approach to develop a design for explainable ML-based CDSSs that incorporates insights from XAI literature while simultaneously addressing physicians’ needs. This design comprises five design principles that designers of ML-based CDSSs can apply to implement user-centered explanations, which are instantiated in a prototype of an explainable ML-based CDSS for lung nodule classification. We rooted the design principles and the derived prototype in a body of justificatory knowledge consisting of XAI literature, the concept of usability, and an online survey study involving 57 physicians. We refined the design principles and their instantiation by conducting walk-throughs with six radiologists. A final experiment with 45 radiologists demonstrated that our design resulted in physicians perceiving the ML-based CDSS as more explainable and usable in terms of the required cognitive effort than a system without explanations.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,284 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,233 citations
"Why Should I Trust You?"
2016 · 14,179 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,096 citations