This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
LLM-Based Chatbot to Reduce Mental Illness Stigma in Healthcare Providers
Citations: 6
Authors: 3
Year: 2025
Abstract
Mental illness stigma in healthcare providers negatively impacts patient care and well-being, yet existing interventions to mitigate this stigma are often resource-intensive and difficult to scale. This paper presents the design and evaluation of a conversational agent (CA) named Stigma Educational Bot 1.0, powered by a large language model (LLM), specifically GPT-4. The CA aims to reduce stigma among healthcare providers by delivering an educational program grounded in anti-stigma principles and behavior change theories across four modules. The CA was developed using the GPT Editor interface, incorporating tailored instructions and a curated knowledge base drawn from evidence-based resources, including materials from the Pan American Health Organization. Its performance was evaluated through four key tasks: direct question answering, generation of case scenarios, identification of stigma in simulated clinical interactions, and generation of empathetic testimonies. Expert evaluators assessed the CA's outputs using a 5-point Likert scale and performance metrics such as precision and recall. Results indicate that the CA excels at delivering structured educational content on topics like "Conditions" (scores of 5.0) but shows limitations in foundational concepts like "Definition" (scores as low as 2.5). It demonstrated high consistency in language and style when generating case scenarios and testimonies, with perceived empathy scores up to 4.5. However, the CA exhibited moderate performance in identifying stigmatizing behaviors, with F1 scores ranging from 0.48 to 0.54 and lower recall rates, particularly for overt manifestations of stigma. The study highlights the potential of GPT-based conversational agents as scalable tools for stigma reduction among healthcare providers by offering accessible and interactive educational interventions. Limitations include reliance on simulated tasks, specific training materials, and moderate performance in stigma detection. Future work should focus on enhancing foundational knowledge delivery, improving the identification of overt stigmatizing behaviors, and assessing real-world impact.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations