This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Can Artificial Intelligence Tools Operate Transparently in Medicine?
4 citations · 1 author · 2022
Abstract
To the Editor: I enjoyed the article by Lee and colleagues on artificial intelligence (AI) in undergraduate medical education.1 This is indeed an emerging and important area for educationalists.2 Although the article does touch on the risks of applying AI in health care, two points warrant further discussion: whether AI tools can operate transparently, and whether clinicians may be replaced by AI. Most applications of AI in health care involve disease diagnosis based on patient data or imagery. The complexity of the requisite data analysis rapidly leads to highly nontransparent algorithms, particularly when machine learning and neural networks are used, and without an understanding of the steps involved it can be difficult for clinicians to be sure that an AI diagnosis is correct or reliable. Recent attempts to create so-called explainable AI systems address this problem by requiring the AI algorithm to highlight the parts of a medical image involved in the diagnosis, yielding some transparency in the computational process. However, AI does not use diagnostic rules in the human sense and has no contextual understanding of what it is doing or what its diagnosis means; it therefore cannot offer any genuine explanation of how it arrived at a diagnosis. This can occasionally lead to catastrophically incorrect or bizarre results. A recent example was an AI system that assessed patients with pneumonia and asthma to be at lower risk of complications than patients with pneumonia alone.3 The developers determined that the AI had reached this incorrect conclusion because patients admitted with both pneumonia and asthma typically went straight to the intensive care unit and hence experienced fewer complications, but there was no way of including this contextual information in the AI's algorithm.
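The kind of image-highlighting "explanation" described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not from the letter: a toy logistic-regression "diagnostic model" over flattened pixels, with gradient-based saliency standing in for the explainable-AI highlighting. Note that the saliency scores show only which pixels the output is numerically sensitive to; they convey no clinical reasoning, which is precisely the limitation discussed.

```python
import numpy as np

# Toy "diagnostic model": logistic regression over 16 flattened pixels
# of a hypothetical 4x4 medical image (weights and image are random
# stand-ins, assumed for illustration only).
rng = np.random.default_rng(0)
w = rng.normal(size=16)        # "learned" pixel weights
image = rng.random(16)         # flattened input image

def predict(x):
    """Predicted probability of disease for input image x."""
    return 1.0 / (1.0 + np.exp(-w @ x))

# Gradient-based saliency: |d prediction / d pixel_i|.
# For sigmoid(w @ x), the analytic gradient is w_i * p * (1 - p).
p = predict(image)
saliency = np.abs(w * p * (1 - p))

# "Highlight" the pixels the model's output is most sensitive to.
top_pixels = np.argsort(saliency)[::-1][:3]
print("prediction:", round(float(p), 3))
print("most influential pixels:", top_pixels.tolist())
```

The highlighted pixels give a map of sensitivity, not an explanation: nothing in the computation encodes why those regions matter clinically, so contextual errors like the pneumonia-and-asthma example remain invisible to this form of transparency.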
Given the complexity of AI algorithms and their inability to deal with the context of patient care (at least for the foreseeable future), it is hard to see how AI tools can operate transparently. It is also difficult to imagine that such AI systems could replace human clinicians any more than a Google search could.