This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Exploring Mental Health Professionals’ Perceptions and Acceptance of AI-based Screening Tools
Citations: 0 · Authors: 3 · Year: 2026
Abstract
INTRODUCTION: Artificial intelligence (AI) is being widely incorporated into healthcare, including mental health screening, showing promise for improving efficiency, early detection, and accessibility. Capturing mental health professionals' perceptions and acceptance of AI in mental health screening is essential for its ethical and effective implementation. MATERIALS AND METHODS: This qualitative study, using purposive sampling, explored the views of psychiatrists, clinical psychologists, psychiatric social workers, and postgraduate psychiatry trainees. In-depth, semi-structured interviews were conducted to collect data on participants' attitudes toward AI, perceived usefulness, ease of use, trust in AI-generated assessments, and ethical concerns. Thematic analysis was used to analyse the interviews. RESULTS: A cautiously optimistic attitude among mental health professionals regarding the use of AI in mental health screening emerged from the thematic analysis. Key themes included AI as a supportive but limited tool; irreplaceable clinical judgement; conditional trust in AI based on the context and complexity of cases; ethical and privacy concerns; the need for empirical validation; and concerns regarding clinical safety due to potential false positives and negatives. CONCLUSION: The potential of AI to improve access and efficiency in screening, particularly for triage purposes, was acknowledged by mental health professionals. However, trust in AI was conditional and depended on transparency, empirical evidence, and preservation of clinician oversight. AI in mental health screening was viewed as a tool to support, not replace, clinical expertise.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 cit.