This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Priorities for artificial intelligence education: clinicians’ perspectives
Citations: 0
Authors: 3
Year: 2026
Abstract
Objective: Educating clinicians about artificial intelligence (AI) is urgent, as the UK General Medical Council places liability with practitioners and the European Union AI Act places it with employers for appropriate training, but also because AI, like any tool, requires training to use safely. The National Health Service England (NHSE) Capability Framework provides guidance, but frontline clinicians' perspectives are unknown, so we sought to identify their priorities.
Methods and analysis: Iterative interviews with residents, educators and experts were synthesised into 10 contextualised AI-related problem statements. We surveyed residents and consultant-educators in the East of England, who rated their confidence in, and the importance of, each statement and ranked their preferred learning modalities.
Results: We received 317 responses. Clinicians' priorities, defined by high importance (I) and low confidence (C), were: 'understanding liability implications' (I: 40%; C: 1.82/5), 'determining appropriate levels of confidence in AI algorithms' (I: 36.5%; C: 1.98/5) and 'mitigating security and privacy risks' (I: 34%; C: 1.68/5). Confidence was low (mean 20, range 10–50), with no significant difference between educators and residents. Residents preferred integration of training into regional teaching, while consultant-educators favoured webinars.
Conclusion: Our findings show that clinicians prioritise practical concerns, such as liability and determining confidence in algorithmic outputs. In contrast, critical appraisal and explaining AI to patients were deprioritised, despite their relevance to clinical safety. This study enhances the NHSE Capability Framework by contextualising AI-related capabilities for clinicians as users and by identifying priorities with which to develop scalable training.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations