This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Development and Validation of the Automated Safety Testing and Reporting Application (ASTRA): A Tool for Conversational Safety Monitoring of Generative AI Tools for Mental Health (Preprint)
Citations: 0
Authors: 10
Year: 2026
Abstract
BACKGROUND: AI-based conversational tools are rapidly expanding within mental health care as a means of increasing access and scalability. At the same time, these systems introduce distinct safety risks arising from both user disclosures (e.g., self-harm ideation) and inappropriate or inadequate AI responses.

OBJECTIVE: The objective of this study was to develop and evaluate the Automated Safety Testing and Reporting Application (ASTRA), an external system intended to identify clinically relevant risk-behaviors across entire AI-mediated mental health conversations.

METHODS: ASTRA was tested on a dataset of 100 synthetic therapeutic conversations written by licensed clinicians to reflect risk-behaviors and harmful responses between users and AI tools. Conversations varied in length and included both subtle and overt examples of risk-behavior across eight predefined categories. Consensus ratings from human coders served as the reference standard. ASTRA's classifications were evaluated across two prompt iterations using standard diagnostic performance metrics and agreement statistics.

RESULTS: ASTRA demonstrated consistently high concordance with expert human ratings across all categories. Accuracy exceeded 0.90 for every risk-behavior category examined, with uniformly high specificity and sensitivity varying by category (range 0.55-1.00). Agreement beyond chance between ASTRA and human raters was substantial to almost perfect (κ = 0.65-1.00). McNemar's tests indicated no evidence of systematic bias toward false positives or false negatives. Detection of user self-harm indicators was particularly accurate, even in conversations where risk was expressed subtly.

CONCLUSIONS: In this initial validation study, ASTRA reliably identified multiple forms of mental health-related risk-behavior at the conversation level. These findings support the feasibility of independent safety-monitoring systems as a complement to AI tools used in mental health contexts and underscore the need for further evaluation with larger, real-world datasets.
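The evaluation described in METHODS and RESULTS rests on standard paired-rater statistics: sensitivity, specificity, and accuracy derived from a 2x2 confusion table, Cohen's κ for chance-corrected agreement, and an exact McNemar test for systematic bias between false positives and false negatives. The sketch below shows how these quantities follow from per-conversation binary labels; it is a minimal illustration, assuming a label layout and function name of our own choosing rather than anything specified in the paper.

```python
from math import comb

def evaluate_category(human, astra):
    """Compare ASTRA's binary labels against human consensus for one
    risk-behavior category (1 = risk present, 0 = absent)."""
    tp = sum(h == 1 and a == 1 for h, a in zip(human, astra))
    tn = sum(h == 0 and a == 0 for h, a in zip(human, astra))
    fp = sum(h == 0 and a == 1 for h, a in zip(human, astra))  # false alarms
    fn = sum(h == 1 and a == 0 for h, a in zip(human, astra))  # misses
    n = tp + tn + fp + fn

    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")

    # Cohen's kappa: observed agreement corrected for chance agreement
    # expected from each rater's marginal positive/negative rates.
    po = accuracy
    pe = ((tp + fn) / n) * ((tp + fp) / n) + ((fp + tn) / n) * ((fn + tn) / n)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0

    # Exact McNemar test: under H0 (no directional bias), the fp vs fn
    # disagreements are symmetric, i.e. Binomial(m, 0.5) on m discordant pairs.
    m, k = fp + fn, min(fp, fn)
    p_mcnemar = (min(1.0, 2 * sum(comb(m, i) for i in range(k + 1)) * 0.5 ** m)
                 if m else 1.0)

    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "kappa": kappa,
            "mcnemar_p": p_mcnemar}

# Toy demo with 8 conversations; the study would pass 100 paired labels
# per category. Yields accuracy 0.75, kappa 0.5, McNemar p = 1.0.
human = [1, 1, 0, 0, 1, 0, 1, 0]
astra = [1, 0, 0, 0, 1, 0, 1, 1]
print(evaluate_category(human, astra))
```

Running this once per predefined risk-behavior category reproduces the kind of per-category accuracy, sensitivity/specificity, κ, and McNemar results reported above.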
Similar Works
Amazon's Mechanical Turk
2011 · 10,029 citations
The Transtheoretical Model of Health Behavior Change
1997 · 7,685 citations
COVID-19 and mental health: A review of the existing literature
2020 · 3,707 citations
Cognitive Therapy and the Emotional Disorders
1977 · 2,931 citations
Mental health problems and social media exposure during COVID-19 outbreak
2020 · 2,790 citations