This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Ethical issues of implementing artificial intelligence in medicine
Citations: 2 · Authors: 1 · Year: 2023
Abstract
Artificial intelligence (AI) systems are highly efficient, but their implementation in medical practice raises a range of ethical issues. The black box problem is fundamental to the philosophy of AI, though it takes a specific form in medicine. To study the problems of implementing AI in medicine, relevant papers from the last three years were selected by citation count and analyzed using the PubMed and Google Scholar search engines.

One of the central problems is that the ways algorithms justify their decisions remain unclear to doctors and patients. This lack of clear, comprehensible principles of AI operation is called the black box problem. How can doctors rely on AI findings without enough data to explain a particular decision? Who will be responsible for the final decision in the case of an adverse outcome (death or serious injury)? In routine practice, medical decisions rest on an integrative approach (an understanding of pathophysiology and biochemistry and the interpretation of past findings), clinical trials, and cohort studies. AI may be used to build a plan for diagnosing and treating a disease without providing a convincing justification for specific decisions. This creates a black box: it is not always clear what information the AI considers important for reaching a conclusion, nor how or why it reaches that conclusion. As Juan M. Durán writes, "Even if we claim to understand the principles underlying AI annotation and training, it is still difficult and often even impossible to understand the inner workings of such systems." The doctor can interpret or verify the results of these algorithms but cannot explain how an algorithm arrived at its recommendation or diagnosis.

Currently, AI models are trained to recognize microscopic adenomas and polyps in the colon. Despite their high accuracy, doctors still have an insufficient understanding of how AI differentiates between types of polyps, and the features that are key to an AI diagnosis remain unclear even to experienced endoscopists. Another example is the colorectal cancer biomarkers recognized by AI: the doctor does not know how the algorithms determine the quantitative and qualitative criteria of the detected biomarkers to formulate a final diagnosis in each individual case, so a black box arises within the diagnostic process. To earn the trust of doctors and patients, the processes underlying the work of AI must be deciphered and explained, describing step by step how a specific result is reached.

Although black box algorithms cannot be called transparent, applying these technologies in practical medicine is worth considering. Despite the problems above, the accuracy and efficiency of AI's solutions do not allow its use to be neglected; on the contrary, this use is necessary. The emerging problems should serve as a basis for training and educating doctors to work with AI, expanding its scope of application, and developing new diagnostic techniques.
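The demand above, that an AI's reasoning be explained step by step, is what model-agnostic explainability methods attempt to approximate. As a loose illustration (not from the paper), the sketch below applies permutation feature importance to a hypothetical opaque classifier: shuffling one input column and measuring the resulting drop in accuracy shows how much the "black box" relies on that feature. The toy model, feature count, and data are all assumptions made for this example.

```python
import random

# Hypothetical "black box": an opaque scoring rule over three input features.
# In practice this would be a trained model whose internals are hidden;
# here feature 0 dominates, feature 1 matters a little, feature 2 is ignored.
def black_box(features):
    return 1 if 2.0 * features[0] + 0.5 * features[1] > 1.0 else 0

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Drop in accuracy when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    base_acc = sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)
    shuffled = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled)  # break the feature's link to the labels
    permuted_rows = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                     for r, v in zip(rows, shuffled)]
    perm_acc = sum(model(r) == y for r, y in zip(permuted_rows, labels)) / len(rows)
    return base_acc - perm_acc

# Synthetic inputs, labelled by the model itself (so base accuracy is 1.0).
rows = [[random.Random(i).random() for _ in range(3)] for i in range(200)]
labels = [black_box(r) for r in rows]

importances = [permutation_importance(black_box, rows, labels, j)
               for j in range(3)]
```

A larger drop for feature 0 than for the ignored feature 2 would surface the model's reliance on that input, even though the rule itself stays hidden. Such techniques approximate, rather than resolve, the black box problem discussed in the abstract.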
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,291 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,535 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations