This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics
16 citations · 1 author · 2021
Abstract
Contemporary medical diagnostics has a dynamic moral landscape, which includes a variety of agents, factors, and components. A significant part of this landscape is composed of information technologies that play a vital role in doctors’ decision-making. This paper focuses on the so-called Intelligent Decision-Support System (IDSS), which is widely implemented in the domain of contemporary medical diagnosis. The purpose of this article is twofold. First, I will show that the IDSS may be considered a moral agent in the practice of medicine today. To develop this idea I will introduce the approach to artificial agency provided by Luciano Floridi, and situate this approach in the context of contemporary discussions regarding the nature of artificial agency. It is argued here that the IDSS possesses a specific sort of agency, exhibits several agent features (e.g. autonomy, interactivity, adaptability), and hence performs autonomous behavior, which may have a substantial moral impact on the patient’s well-being. It follows that, through the technology of artificial neural networks combined with ‘deep learning’ mechanisms, the IDSS tool achieves a specific sort of independence (autonomy) and may possess a certain type of moral agency. Second, I will provide a conceptual framework for the ethical evaluation of the moral impact that the IDSS may have on the doctor’s decision-making and, consequently, on the patient’s well-being. This framework is the Object-Oriented Model of Moral Action developed by Luciano Floridi. Although this model appears in many contemporary discussions in the field of information and computer ethics, it has not yet been applied to the medical domain. This paper addresses this gap and seeks to reveal the hidden potential of the model for the field of medical diagnosis.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations