This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Clinician perceptions of artificial intelligence in care: A mixed methods study (Preprint)
Citations: 0
Authors: 4
Year: 2026
Abstract
<sec> <title>BACKGROUND</title> Artificial intelligence (AI) is increasingly being explored as a tool to enhance efficiency, access, and diagnostic accuracy in mental health care. However, clinicians are central to the delivery and oversight of care, yet their perspectives on the use of AI remain underexamined. </sec> <sec> <title>OBJECTIVE</title> This study aimed to explore clinicians' perceptions of AI in clinical care, including perceived benefits, risks, barriers to implementation, and training needs. </sec> <sec> <title>METHODS</title> A cross-sectional mixed methods survey was distributed from August to November 2024 to mental health professionals in the US. Quantitative data were analyzed using descriptive statistics, while open-ended responses were analyzed thematically to identify key insights. </sec> <sec> <title>RESULTS</title> Most respondents (n=62) were not currently using AI in practice. The most frequently endorsed benefits included reductions in clinical workload, improved efficiency, and enhanced data analysis. However, a majority expressed discomfort using AI in patient care, and concerns were raised about inaccurate outputs, algorithmic bias, privacy, and weakened therapeutic rapport. Barriers to adoption included clinician resistance, lack of validation, and challenges with technical integration. Most respondents believed that specialized training in AI ethics and applications is important for clinicians. Qualitative findings reinforced concerns about dehumanization, cultural insensitivity, ethical accountability, and insufficient technological literacy. </sec> <sec> <title>CONCLUSIONS</title> Mental health professionals view AI as a potentially useful adjunct to care, but not a replacement for it. Ethical concerns, limited trust, and a strong emphasis on the human dimensions of therapy suggest that implementation must proceed with caution. Clinician-informed strategies, ethical frameworks, and targeted training are essential to support the responsible and effective integration of AI into mental health practice. </sec>
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations