This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
218 Scoping Review of Patient Safety Implications of AI-Facilitated Synchronous Communication in Cross-Cultural Consultations with Refugees and Migrants
Citations: 0
Authors: 8
Year: 2025
Abstract
Session: PTH 6: Health Policy and Health Services 1, B307 (FCSH), September 4, 2025, 16:30 - 17:30

Access to and availability of interpreters for patients in cross-cultural consultations is considered a critical global health adaptation. Interpreter provision can pose a challenge in some healthcare settings, leaving refugees and migrants at risk of suboptimal care and potential gaps in service delivery. Artificial Intelligence (AI) is increasingly used as a pragmatic alternative to in-person or telephone interpreting in healthcare settings that face interpreter provision challenges. However, the patient safety implications of using AI for this purpose are unknown. The aim of this research is to identify and map the available evidence on the use of AI to facilitate synchronous communication between a refugee or migrant and their GP, focusing on patient safety implications. Following the latest JBI guidance, we searched five databases covering the period July 2017 to June 2024. We also conducted an extensive search of the grey literature and of keywords on social media. Data were extracted and synthesised to report the current evidence on the use of AI to interpret synchronous communication in a variety of healthcare settings. We screened 220 articles covering diverse healthcare settings, resulting in five international studies and conference papers being included in this review. We found frequent use of AI-powered applications, specifically Google Translate, which is not designed for communication in medical contexts, to address language barriers with refugees and migrants across diverse clinical settings. While some benefits of Google Translate are reported, most reported experiences are negative. The patient safety risks associated with relying on AI to interpret have not been widely examined. The implications of using AI to interpret synchronous communication between refugees or migrants and their GP are under-researched.
There is an urgent need for comprehensive guidance for GPs on the use of AI tools in cross-cultural consultations, as their growing adoption highlights potential risks to patient safety and communication accuracy.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations