This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Investigating the Potential Effects of Medical AI Systems on Physician Autonomy: Pretest of a Semi-Structured Qualitative Interview Guide (Preprint)
Citations: 0
Authors: 5
Year: 2026
Abstract
<sec> <title>BACKGROUND</title> Artificial intelligence (AI) is an increasingly prominent feature of contemporary healthcare, with medical AI systems beginning to support diagnostic and therapeutic processes in many clinical domains. Alongside the anticipated benefits of these technologies, their introduction also raises broader questions about how clinical work and professional roles may change. In particular, medical AI systems may affect physician autonomy, a key factor influencing the acceptance and long-term implementation of new medical technologies. </sec> <sec> <title>OBJECTIVE</title> The aim of this study was to develop and pretest a semi-structured interview guide concerning the potential effects of medical AI systems on physician autonomy. </sec> <sec> <title>METHODS</title> The interview guide was theoretically grounded in the seven-component model of physician autonomy proposed by Schulz and Harrison. Semi-structured qualitative interviews were conducted with a sample of seven hospital physicians. Interview recordings were transcribed and analyzed using a hybrid inductive–deductive thematic approach: themes were first identified inductively from participant responses and subsequently mapped onto the Schulz and Harrison model. Data were analyzed to assess both the potential effects of medical AI systems on physician autonomy and the methodological adequacy of the interview guide. </sec> <sec> <title>RESULTS</title> Most participants did not express strong concerns about losing clinical autonomy through the introduction of AI systems. However, several autonomy-related risks were identified, including potential deskilling, automation bias, limited system explainability, and increasing economic or cost-related pressures. Participants emphasized that AI should serve as a supportive tool rather than a substitute for physician judgment.
All physicians agreed that AI systems should not replace clinicians as primary clinical decision-makers. </sec> <sec> <title>CONCLUSIONS</title> Medical AI was largely viewed as compatible with physician autonomy, yet participants highlighted important risks that warrant attention in future research and system design. Our findings suggest that autonomy-related concerns extend beyond direct loss of decision-making authority and include broader professional, cognitive, and organizational dimensions. However, our inductively identified themes and subthemes did not fully reflect all components of physician autonomy, indicating the need for further refinement of how to assess physician autonomy in qualitative research. </sec>
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations