This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review
Citations: 106
Authors: 3
Year: 2021
Abstract
Objective: We performed a comprehensive review of the literature to better understand the trust dynamics between medical artificial intelligence (AI) and healthcare expert end-users. We explored the factors that influence trust in these technologies and how they compare to established concepts of trust in the engineering discipline. By identifying the qualitatively and quantitatively assessed factors that influence trust in medical AI, we gain insight into how autonomous systems can be optimized during the development phase to improve decision-making support and clinician-machine teaming. This facilitates an enhanced understanding of the qualities that healthcare professional users seek in AI to consider it trustworthy. We also highlight key considerations for promoting ongoing improvement of trust in autonomous medical systems to support the adoption of medical technologies into practice. Background: Artificially intelligent technology is revolutionizing healthcare. However, lack of trust in the output of such complex decision support systems introduces challenges and barriers to adoption and implementation in clinical practice. Methods: We searched databases including Ovid MEDLINE, Ovid EMBASE, Clarivate Web of Science, and Google Scholar, as well as gray literature, for publications from 2000 to July 15, 2021, that reported features of AI-based diagnostic and clinical decision support systems that contribute to enhanced end-user trust. Papers discussing implications and applications of medical AI in clinical practice were also recorded. Results were based on the number of papers that discussed each trust concept, either quantitatively or qualitatively, using frequency of concept commentary as a proxy for the importance of each concept.
Conclusions: Explainability, transparency, interpretability, usability, and education are among the key identified factors thought to influence a healthcare professional's trust in medical AI and enhance clinician-machine teaming in critical decision-making healthcare environments. We also identified the need to better evaluate and incorporate other critical factors to promote trust by consulting medical professionals when developing AI systems for clinical decision-making and diagnostic support.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,197 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,047 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,410 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations