This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Generative AI in Primary Care: A qualitative study of UK General Practitioners’ Views (Preprint)
0
Citations
6
Authors
2025
Year
Abstract
<sec> <title>BACKGROUND</title> The potential for generative AI (GenAI) to assist with clinical tasks is the subject of ongoing debate within biomedical informatics and related fields. </sec> <sec> <title>OBJECTIVE</title> This study aimed to explore general practitioners’ (GPs’) opinions about GenAI in primary care. </sec> <sec> <title>METHODS</title> In January 2025, we conducted a Web-based survey of 1005 UK GPs’ experiences with and opinions of GenAI in clinical practice. This study involved a qualitative descriptive analysis of written responses (“comments”) to an open-ended question in the survey. </sec> <sec> <title>RESULTS</title> Comments were classified into 3 major themes and 8 subthemes relating to GenAI in clinical practice. The major themes were: (1) unfamiliarity, (2) ambivalence and anxiety, and (3) role in clinical tasks. ‘Unfamiliarity’ encompassed a lack of experience and knowledge, and the need for training on GenAI. ‘Ambivalence and anxiety’ included GPs’ mixed expectations of these tools, beliefs about diminished human connection, and skepticism about AI accountability. Finally, commenting on the role of GenAI in clinical tasks, GPs believed it would help with documentation. However, respondents questioned AI’s clinical judgment and raised concerns about the operational uncertainty surrounding these tools. </sec> <sec> <title>CONCLUSIONS</title> This study provides timely insights into GPs’ perspectives on the role, impact, and limitations of GenAI in primary care. A majority reported limited experience and training with these tools; however, many GPs perceived potential benefits of GenAI and ambient AI for documentation. Notably, two years after the widespread introduction of GenAI, GPs’ persistent lack of understanding and training remains a critical concern. More extensive qualitative work would provide a more in-depth understanding of GPs’ views. </sec>
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations