This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Can AI Technologies Support Automating Clinical Supervision? Assessing the Potential of ChatGPT
2
Citations
17
Authors
2024
Year
Abstract
Clinical supervision is an essential element of trainee development, preventing burnout and ensuring the effectiveness of trainees' interventions. AI technologies offer increasing possibilities for the development of clinical practices, among which supervision appears to possess ideal characteristics for automation. In this study we test the capability of ChatGPT-4 to provide supervisory feedback and compare it with feedback produced by a qualified supervisor. Two ChatGPT-4-generated feedbacks (the first produced by a naïve identity and the second by a trained identity) and one human-produced feedback were evaluated by means of a liking questionnaire administered to a group of Gestalt psychotherapy trainees. Principal Component Analysis (PCA) identified four components of the questionnaire: relational and emotional dimensions (C1), didactic and technical quality (C2), treatment support and development (C3), and professional orientation and adaptability (C4). The satisfaction ratings obtained for the three supervisory feedbacks were compared by applying one-way analysis of variance (ANOVA). Statistical evaluations were performed using SPSS version 25. The feedback generated by the pre-trained AI (f2) was rated significantly higher than the other two (the untrained AI feedback (f1) and the human feedback (f3)) in C4; in C1 the superiority of f2 over f1, but not over f3, was significant. These results suggest that the use of pre-trained AI may be a valuable option for increasing the effectiveness of clinical supervision in some specific areas, especially career guidance.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations