This is an overview page with metadata for this scientific article. The full article is available from the publisher.
AB236. SOH26AB_0102. Artificial intelligence in emergency medicine: an evaluation of patient perceptions and expectations
Citations: 0
Authors: 12
Year: 2026
Abstract
Background: The role of artificial intelligence (AI) in healthcare is growing rapidly, with applications in triage, diagnostic imaging, and data interpretation. However, understanding of how patients perceive AI remains limited, particularly in the emergency department (ED). As patients are the central stakeholders, their perspectives are crucial for the successful implementation of these technologies. This study aimed to evaluate patient attitudes and concerns regarding AI. Methods: Following audit approval (Clinical Audit; CA2025-106), a cross-sectional survey was conducted at Beaumont Hospital, Dublin. From June to August 2025, over 1,700 patients were approached, and 1,325 consented (76% response rate). The questionnaire gathered information on prior AI knowledge, perceived benefits and risks, comfort with the use of AI in clinical tasks, and opinions on accountability. Quantitative data were analysed descriptively, while qualitative responses underwent content analysis. Results: The median age of respondents was 40–49 years, 82% held a Leaving Certificate or higher, and 72% reported little to no prior knowledge of AI. Almost all patients (94%) wanted to be informed if AI was involved in their care. Although comfort levels varied by task, 85% of patients wanted clinicians to retain final decision-making power, and 49.7% felt that accountability lies solely with doctors. Conclusions: Patients in the ED generally support the use of AI and feel it has the potential to save time and money. Importantly, comfort and acceptance were highest when AI was used as an adjunct to, rather than a replacement for, clinicians. Integration strategies should ensure transparency, maintain clinical oversight, and align with patient expectations.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,527 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,419 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,909 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,578 citations