This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Perceptions and Ethical Concerns Regarding the Use of Artificial Intelligence in Mental Healthcare Among the Mental-Health Workforce: A Cross-Sectional Study
0
Citations
4
Authors
2026
Year
Abstract
INTRODUCTION: Artificial intelligence (AI) is increasingly incorporated into mental healthcare, offering opportunities to improve diagnostic accuracy, service accessibility, and administrative efficiency. However, effective implementation depends on the mental health workforce's awareness, perceptions, and ethical concerns related to AI. Evidence regarding these factors remains limited within diverse practice settings in the United States (US).

METHODS: A descriptive cross-sectional survey was conducted among mental health professionals and trainees practicing in two US states: California and New Jersey. Data were collected using a structured, self-administered online questionnaire adapted from the validated Shinners Artificial Intelligence Perception (SHAIP) scale and expanded to include an ethical-concern domain. The survey assessed demographic characteristics, AI awareness, perceptions, ethical concerns, and barriers and facilitators to AI adoption. Statistical analyses were performed using SPSS (version 26; IBM SPSS Statistics for Windows, Armonk, NY). Descriptive statistics summarized participant characteristics, while inferential analyses, including chi-square, Mann-Whitney U, and Kruskal-Wallis tests, were used to examine bivariate differences in AI-related outcomes across participant characteristics, with statistical significance set at p < 0.05.

RESULTS: A total of 220 mental health professionals and trainees participated, representing psychiatry, psychology, and allied mental health disciplines. Although 63.2% reported formal training in AI, only 39.6% reported using AI-assisted systems in clinical practice. Overall awareness and perceptions of AI were positive, with mean scores ranging from 3.45 to 3.75 across awareness and perception domains. Participants endorsed AI's potential to enhance diagnostic accuracy, reduce administrative workload, and improve access to mental health services. Ethical concerns were prominent, particularly regarding potential bias in clinical decision-making (mean = 3.81) and data privacy and security (mean = 3.72). Familiarity with AI concepts, years of clinical experience, and formal AI training were significantly associated with awareness, perception, and ethical-awareness scores (p < 0.05), while age and gender were not.

CONCLUSION: The mental health workforce demonstrated favorable attitudes toward AI in mental healthcare but reported limited real-world adoption and substantial ethical concerns. These findings underscore the need for targeted education, robust ethical frameworks, and practical training to bridge the gap between AI awareness and responsible clinical implementation.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,553 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,444 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,943 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations