This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Public Perceptions of Artificial Intelligence (AI) Use in Healthcare: A Scoping Review (Preprint)
Citations: 0
Authors: 14
Year: 2025
Abstract
<sec> <title>BACKGROUND</title> Artificial intelligence (AI) tools are developing rapidly and are being incorporated into healthcare in a variety of ways. However, public and patient concerns about these tools are not being reported at the same volume or speed as these advancements, potentially hindering efforts to promote AI adoption in healthcare spaces. </sec> <sec> <title>OBJECTIVE</title> This scoping review aimed to synthesize empirical evidence on patient and public perceptions of AI in healthcare, focusing on perceived benefits and risks, and to characterize the recent literature by country, area of medicine, and AI type. </sec> <sec> <title>METHODS</title> We conducted a scoping review following PRISMA guidelines and Arksey and O'Malley's 5-stage scoping review framework. We searched three databases—ACM Digital Library, PubMed, and Web of Science—for articles published between January 2020 and February 2024 that described public or patient perceptions of AI in healthcare. </sec> <sec> <title>RESULTS</title> A total of 3,558 studies were screened, and 123 met the inclusion criteria. Most studies did not specify a medical domain (28, 20.9%) or an AI type (59, 41.8%). Among those that did, geriatrics/healthy aging (17, 12.7%), oncology (14, 10.4%), and mental health (11, 8.2%) were most common. Frequently studied technologies included robots (21, 14.9%), decision support systems (17, 12.1%), and chatbots (15, 10.6%). Reported benefits emphasized patient satisfaction (46, 25.8%) and efficiency (34, 19.1%), while risks centered on trust (59, 26.3%), privacy (47, 21.0%), and patient safety (27, 12.1%). </sec> <sec> <title>CONCLUSIONS</title> AI tools are generally perceived positively when they enhance patient satisfaction, streamline clinical workflows, and enable more personalized treatment. However, persistent concerns about data privacy, reduced human interaction, and a general lack of trust and acceptance remain. These challenges underscore the need to design AI tools with a focus on building trust through comprehensive data security guidelines and improved provider-patient communication. </sec>
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations