This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Influences on User Trust in Healthcare Artificial Intelligence (HAI): A Systematic Review (Preprint)
Citations: 1
Authors: 4
Year: 2021
Abstract
<sec> <title>BACKGROUND</title> Artificial Intelligence (AI) is becoming increasingly prominent in domains such as healthcare. It is argued to be transformative by altering the way in which healthcare data are used, as well as by tackling rising costs and staff shortages. The realisation and success of AI depend heavily on people’s trust in its applications. Yet, the influences on trust in AI applications in healthcare have so far been underexplored. </sec> <sec> <title>OBJECTIVE</title> The objective of this study was to identify aspects (related to users, the AI application, and the wider context) influencing trust in healthcare AI (HAI). </sec> <sec> <title>METHODS</title> We performed a systematic review to map out influences on user trust in HAI. To identify relevant studies, we searched 7 electronic databases in November 2019 (ACM Digital Library, IEEE Xplore, NHS Evidence, Ovid ProQuest Dissertations & Theses Global, Ovid PsycINFO, PubMed, Web of Science Core Collection). Searches were restricted to publications available in English and German, with no publication date restriction. To be included, studies had to be empirical; focus on an AI application (excluding robotics) in a health-related setting; and evaluate applications with regard to users. </sec> <sec> <title>RESULTS</title> Overall, 3 studies (1 mixed-methods and 2 qualitative, all in English) were included. Influences on trust fell into three broad categories: human-related (knowledge, expectation, mental model, self-efficacy, type of user, age, gender), AI-related (data privacy and safety, operational safety, transparency, design, customizability, trialability, explainability, understandability, power-control balance, benevolence), and related to the wider context (AI company, media, social network of the user). These factors resulted in an updated logic model illustrating the relationships between these aspects. 
</sec> <sec> <title>CONCLUSIONS</title> Trust in healthcare AI depends on a variety of factors, both external and internal to the AI application. This study contributes to our understanding of what influences trust in HAI by highlighting key influences as well as pointing to gaps and issues in existing research on trust and AI. In so doing, it offers a starting point for further investigation of trust environments as well as trustworthy AI applications. </sec>
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations