OpenAlex · Updated hourly · Last updated: 10.05.2026, 05:36

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Prompt Engineering Framework for Large Language Model–Based Mental Health Chatbots: Conceptual Framework

2025 · 6 citations · JMIR Mental Health · Open Access
Open full text at the publisher

6 citations · 2 authors · Year: 2025

Abstract

BACKGROUND: Artificial intelligence (AI), particularly large language models (LLMs), presents a significant opportunity to transform mental health care through scalable, on-demand support. While LLM-powered chatbots may help reduce barriers to care, their integration into clinical settings raises critical concerns regarding safety, reliability, and ethical oversight. A structured framework is needed to capture their benefits while addressing inherent risks. This paper introduces a conceptual model for prompt engineering, outlining core design principles for the responsible development of LLM-based mental health chatbots.

OBJECTIVE: This paper proposes a comprehensive, layered framework for prompt engineering that integrates evidence-based therapeutic models, adaptive technology, and ethical safeguards. The objective is to outline a practical foundation for developing AI-driven mental health interventions that are safe, effective, and clinically relevant.

METHODS: We outline a layered architecture for an LLM-based mental health chatbot. The design incorporates (1) an input layer with proactive risk detection; (2) a dialogue engine featuring a user-state database for personalization and Retrieval-Augmented Generation (RAG) to ground responses in evidence-based therapies such as Cognitive Behavioral Therapy (CBT), Acceptance and Commitment Therapy (ACT), and Dialectical Behavior Therapy (DBT); and (3) a multi-tiered safety system, including a post-generation ethical filter and a continuous learning loop with therapist oversight.

RESULTS: The primary contribution is the framework itself, which systematically embeds clinical principles and ethical safeguards into system design. We also propose a comparative validation strategy to evaluate the framework's added value against a baseline model. Its components are explicitly mapped to the FAITA-MH and READI frameworks, ensuring alignment with current scholarly standards for responsible AI development.

CONCLUSIONS: The framework offers a practical foundation for the responsible development of LLM-based mental health support. By outlining a layered architecture and aligning it with established evaluation standards, this work provides guidance for developing AI tools that are technically capable, safe, effective, and ethically sound. Future research should prioritize empirical validation of the framework through the phased, comparative approach introduced in this paper.
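The three-layer flow described in the Methods section can be sketched in code. The following is a minimal, purely illustrative Python sketch, not an implementation from the paper: the risk lexicon, therapy snippets, and banned-phrase list are hypothetical placeholders, and the RAG retrieval and LLM call are replaced with simple stand-ins.

```python
# Illustrative sketch of the abstract's layered architecture:
# (1) input layer with risk detection, (2) a dialogue step grounded via
# retrieval (RAG stand-in), (3) a post-generation ethical filter.
# All data and component names below are hypothetical examples.

RISK_TERMS = {"suicide", "self-harm", "kill myself"}       # toy crisis lexicon

THERAPY_SNIPPETS = {                                        # toy RAG corpus
    "anxiety": "CBT: examine the evidence for and against the anxious thought.",
    "avoidance": "ACT: notice the urge to avoid, then act on your values anyway.",
}

BANNED_PHRASES = {"diagnose", "prescribe"}                  # toy filter list


def detect_risk(message: str) -> bool:
    """Input layer: flag messages containing crisis language."""
    text = message.lower()
    return any(term in text for term in RISK_TERMS)


def retrieve_grounding(message: str) -> str:
    """RAG stand-in: return an evidence-based snippet matching the message."""
    text = message.lower()
    for key, snippet in THERAPY_SNIPPETS.items():
        if key in text:
            return snippet
    return "General: reflect the user's feeling and ask an open question."


def ethical_filter(draft: str) -> str:
    """Post-generation filter: block clinical claims the bot must not make."""
    if any(phrase in draft.lower() for phrase in BANNED_PHRASES):
        return "I can share coping strategies, but I cannot give medical advice."
    return draft


def respond(message: str) -> str:
    """Full pipeline: risk check, grounded generation, then safety filter."""
    if detect_risk(message):  # proactive risk detection runs before generation
        return "ESCALATE: please contact a crisis line or a professional."
    grounding = retrieve_grounding(message)
    draft = f"[LLM draft grounded in] {grounding}"  # stand-in for the LLM call
    return ethical_filter(draft)
```

Note that in the framework the safety system is multi-tiered: risk detection precedes generation, while the ethical filter runs after it, so an unsafe draft never reaches the user even if retrieval and generation succeed.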

Topics

Digital Mental Health Interventions · Artificial Intelligence in Healthcare and Education · Mental Health via Writing