This is an overview page with metadata for this scientific work. The full article is available from the publisher.
#218 Artificial intelligence in the emergency department: a patient’s perspective
Citations: 0
Authors: 12
Year: 2025
Abstract
<h3>Introduction</h3>
Artificial intelligence (AI) is increasingly applied in Emergency Medicine, supporting triage optimisation, diagnostic decision-making, and imaging interpretation. While the clinical evidence base for AI is expanding, there is limited understanding of how Emergency Department (ED) patients perceive, understand, and accept these technologies. Patient perspectives are vital to ensure that AI implementation in the ED is safe, transparent, and aligned with patient expectations. This study aimed to evaluate ED patients' awareness, attitudes, and concerns regarding AI integration.
<h3>Methods</h3>
A cross-sectional survey was undertaken at Beaumont Hospital, Dublin, following audit approval (CA2025-106). From June to August 2025, 1,623 patients were approached and 1,088 responded (67% response rate). The questionnaire, developed from a literature review, gathered demographic data, prior AI exposure, perceived benefits and risks, and preferences for AI involvement in clinical decisions. Quantitative data were analysed descriptively, while qualitative responses underwent thematic analysis.
<h3>Results</h3>
The median age group was 40–49 years; 50% were female and 77% identified as White Irish. Most respondents (74%) reported little or no knowledge of AI. Almost all (94%) wanted to be informed if AI was used in their care, with 83% preferring written or verbal communication. While 90% supported clinicians retaining final decision-making authority, comfort levels varied by task: 31% were comfortable with AI triaging patients (18% uncomfortable); 49% were comfortable with AI interpreting X-rays when a doctor made the final decision (7% uncomfortable); only 11% were comfortable with AI interpreting X-rays without doctor input (33% uncomfortable). Regarding accountability in AI-assisted care, 50% believed responsibility should rest solely with the doctor, while only 8% felt the AI itself should be accountable.
<h3>Conclusion</h3>
ED patients generally support AI as a complementary tool rather than a replacement for clinicians. Acceptance is highest when AI augments, rather than substitutes for, human judgement. Implementation strategies should emphasise transparency, preserve clinician oversight, and ensure AI applications align with patient priorities and comfort levels. <i>*presenting author</i>