This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Co‐designing diagnosis: Towards a responsible integration of Machine Learning decision‐support systems in medical diagnostics
33 citations · 2 authors · Year: 2021
Abstract
RATIONALE: This paper aims to show how the focus on eradicating bias from Machine Learning decision-support systems in medical diagnosis diverts attention from the hermeneutic nature of medical decision-making and the productive role of bias. We want to show how the introduction of Machine Learning systems alters the diagnostic process. Reviewing the negative conception of bias and incorporating the mediating role of Machine Learning systems in medical diagnosis are essential for encompassing, critical and informed medical decision-making.

METHODS: This paper presents a philosophical analysis, employing the conceptual frameworks of hermeneutics and technological mediation, while drawing on the case of Machine Learning algorithms assisting doctors in diagnosis. This paper unravels the non-neutral role of algorithms in the doctor's decision-making and points to the dialogical nature of interaction not only with the patients but also with the technologies that co-shape the diagnosis.

FINDINGS: Following the hermeneutical model of medical diagnosis, we review the notion of bias to show how it is an inalienable and productive part of diagnosis. We show how Machine Learning biases join human ones to actively shape the diagnostic process, simultaneously expanding and narrowing medical attention, highlighting certain aspects while disclosing others, thus mediating medical perceptions and actions. Based on that, we demonstrate how doctors can take Machine Learning systems on board for an enhanced medical diagnosis, while being aware of their non-neutral role.

CONCLUSIONS: We show that Machine Learning systems join doctors and patients in co-designing a triad of medical diagnosis. We highlight that it is imperative to examine the hermeneutic role of Machine Learning systems. Additionally, we suggest including not only the patient but also colleagues to ensure an encompassing diagnostic process, to respect its inherently hermeneutic nature and to work productively with the existing human and machine biases.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations