This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainable Machine Learning, Patient Autonomy, and Clinical Reasoning
2
Citations
2
Authors
2022
Year
Abstract
Clinical decision support systems based on complex machine learning models render opaque the rationale and value commitments that underpin diagnoses and suggested treatments. This creates a tension with the prevailing view in medical ethics, which emphasizes patients making autonomous decisions based on an understanding of relevant medical evidence alongside their beliefs and values. Calls for algorithmic explainability in clinical settings are partly motivated by this tension. The question is what needs to be explained, to whom, in what way, and when, to integrate machine learning systems into the clinical process in a way that is consistent with patient-centred decision-making. In this chapter, the authors review these tensions and argue that answers to these questions depend on more fundamental issues in the philosophy of medicine regarding the logic of clinical reasoning. The authors outline and defend a broadly Peircean account which, they argue, captures ethically salient aspects of the interplay between clinicians and decision support systems, and use it to shed light on the particulars of the explainability challenge.
Similar Works
The Strengths and Difficulties Questionnaire: A Research Note
1997 · 14,537 citations
Making sense of Cronbach's alpha
2011 · 13,683 citations
QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies
2011 · 13,549 citations
A method for estimating the probability of adverse drug reactions
1981 · 11,454 citations
Evidence-Based Medicine
1992 · 4,135 citations