This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Why we cannot trust artificial intelligence in medicine
60 citations · 2 authors · 2019
Abstract
George Orwell said, “if thought corrupts language, language can also corrupt thought.”[1] Orwell's worry about the totalitarian regimes of his day offers broader insights about how language can obscure thinking and values, including about contemporary medical applications of artificial intelligence (AI). The potential of AI to revolutionise medicine appears vast.[2] Nevertheless, concerns over the unknown and unknowable so-called black boxes of AI have spurred a movement toward building trust in AI. Although well intentioned, applying trust to AI is a category error: it mistakenly assumes that AI belongs to a category of things that can be trusted. Trust implies entrustment, placing something of value in the responsibility of another in circumstances of vulnerability. The ideal patient–physician relationship reflects this definition. Patients place their health in physicians' voluntary responsibility and discretion and believe in physicians' benevolent motives. AI does not have voluntary agency and cannot be said to have motives or character. These arguments have significance beyond semantics. Promulgating trust in AI could erode a deeper, moral sense of trust. Were we to conflate trust with mere reliability and accuracy, then as the performance of AI improves, trust in physicians could decline, since their technical accuracy might end up being inferior to that of machines. Trust properly understood involves human thoughts, motives, and actions that lie beyond technical, mechanical characteristics. To sacrifice these elements of trust corrupts our thinking and values.
It also limits our imaginations about the meaning of the patient–physician relationship at a time when promoting humane, personalised care seems ever more crucial. Embracing trust in AI as if AI were a moral agent also unwittingly fosters diffusion of responsibility.[3] Absolving physicians of blame in times of error while muting praise for wise decisions takes medicine in the wrong direction. Although AI, like a faulty surgical instrument, might be causally implicated, we cannot rightly assign moral responsibility to it. Whether future versions of AI can be regarded as moral agents is only a matter of speculation.[4] Although in common parlance we certainly speak of trusting machines—eg, trusting cars to get us places—we ought not confuse these colloquialisms with the true meaning of trust. Preserving precision in the usage of trust strikes at the heart of the identity of medicine. As AI increasingly becomes a part of medicine, its proper role should be in supporting effective, empathic, and ethically attentive care for humans, by humans. Instead of trusting AI, we should aspire to a medicine that warrants placing our trust in each other.
We declare no competing interests. Both authors contributed equally to the conception or design of the work; the acquisition, analysis, or interpretation of data for the work; drafting the work or revising it critically for important intellectual content; and final approval of the version to be published; and both agree to be accountable for all aspects of the work.
References
1. Orwell G. Politics and the English Language. In: Orwell S, Angus I, eds. The Collected Essays, Journalism and Letters of George Orwell. New York: Harcourt, Brace, Jovanovich; 1968: 127-140.
2. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25: 44-56.
3. Nissenbaum H. Accountability in a computerized society. Sci Eng Ethics 1996; 2: 25-42.
4. Char DS, Shah NH, Magnus D. Implementing machine learning in health care—addressing ethical challenges. N Engl J Med 2018; 378: 981-983.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations