This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The use of artificial intelligence in healthcare as perceived by the citizens and patients: a narrative review of the literature
Citations: 0
Authors: 7
Year: 2025
Abstract
The growth of scientific literature on large language models (LLMs), such as ChatGPT, anticipates their central role in accessing health information but poses potential risks, including the false belief that artificial intelligence (AI) could replace doctors in providing reliable information. Our study, part of the Slow AI project launched in partnership with the Slow Medicine ETS Association, reviewed the literature on ChatGPT use by the public, analyzing citizens' and patients' perceptions of using AI for health-related questions, identifying key benefits and concerns, and providing recommendations for the safe and effective use of LLMs. We conducted a narrative review following PRISMA guidelines, including qualitative, quantitative, and mixed-methods studies selected through a search of the PubMed database. Data were extracted and analyzed using a predefined form. Out of 388 records, 120 studies were included, primarily from the USA (65), Europe (19), and Asia (15). Most studies focused on general medicine (37), with patients (57) being the main participants. Key findings indicate that LLMs improve access to health information, aiding diagnostic accuracy and patient understanding. However, risks exist, such as inaccurate or outdated information, lack of empathy, and privacy concerns. These challenges highlight the need for reliable AI training with real-world data and for clinician oversight to mitigate risks. Lastly, while LLMs can improve communication, they should complement, not replace, human interaction. LLMs in healthcare offer great potential but also present risks. Safeguards and clinician oversight are crucial to preserve patient safety and the doctor-patient relationship.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations