This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Acceptance of Medical Treatment Regimens Provided by AI vs. Human
Citations: 29
Authors: 4
Year: 2021
Abstract
Along with the continuing development of information technology, interaction between artificial intelligence and humans is becoming ever more frequent. In this context, a phenomenon called “medical AI aversion” has emerged, in which the same behavior elicits different responses depending on whether it comes from medical AI or from a human. Medical AI aversion can be understood in terms of how people attribute mind capacities to different targets. It has been demonstrated that when medical professionals dehumanize patients (attributing fewer mental capacities to them and, to some extent, not perceiving or treating them as fully human), they are more likely to choose painful but effective treatment options. From the patient’s perspective, will a painful treatment plan be unacceptable when the provider is perceived as a human whose mental capacities are nonetheless disregarded? Conversely, might a painful treatment plan be accepted because the provider is an artificial intelligence? Building on these questions, the current study investigated the phenomenon of medical AI aversion in a medical context. Three experiments found that: (1) when faced with the same treatment plan, patients accepted it more readily from a human doctor; (2) the treatment provider and the nature of the treatment plan interacted to affect acceptance of the plan; and (3) perceived experience capacities mediated the relationship between treatment provider (AI vs. human) and treatment-plan acceptance. Overall, this study attempts to explain medical AI aversion from the perspective of mind perception theory, and the findings have applied implications for guiding the more rational use of medical AI and for persuading patients.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,456 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,332 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,779 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,533 citations