This is an overview page with metadata for this scientific article. The full article is available from the publisher.
AI in medical practice: doctors' perspectives on the benefits, challenges and facilitators of artificial intelligence scribe use
Citations: 0 · Authors: 5 · Year: 2026
Abstract

Purpose: As artificial intelligence (AI) scribes become more common in clinical settings, understanding the human factors influencing their uptake is critical. This study investigates doctors' perceptions of AI scribes, focusing on their benefits, challenges (risks or barriers) and facilitators of scribe use in medical practice.

Methods: We conducted focus groups with thirty-three medical practitioners (21 general practitioners and 12 medical specialists). Separate groups were conducted for AI scribe users and non-users. Benefits, challenges, and facilitators of AI scribe use were identified through researcher-coded qualitative analysis and synthesised into overarching themes.

Results: AI users had positive perceptions of scribes, reporting improvements to efficiency, quality of notes and doctor-patient interactions. The major challenge themes emerging across AI users and non-users were (1) insufficient knowledge about AI technology and data management, (2) errors produced by scribes, (3) medico-legal risks and responsibilities, (4) privacy concerns, (5) overreliance and de-skilling and (6) doctors losing control over decisions. Perceived facilitators to overcome these challenges included guidance on AI scribe best practice, regulation of AI technology, peer learning and the development of new skills and workflows for using AI.

Conclusions: Doctors in this study saw promise in AI scribes reducing their administrative burden and streamlining clinical documentation. However, AI errors, limited knowledge of the technology and data management, and fears about how AI might change clinical work are significant barriers for some doctors. Clear policy and guidance from leading medical bodies, as well as ongoing research on real-world implementation, will play an important role in supporting responsible adoption in medical practice.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations