OpenAlex

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Generative artificial intelligence in mental health care: potential benefits and current challenges

2024 · 69 citations · 2 authors · World Psychiatry · Open Access
Open full text at the publisher

Abstract

The potential of artificial intelligence (AI) in health care is being intensively discussed, given the easy accessibility of programs such as ChatGPT. While it is usually acknowledged that this technology will never replace clinicians, we should be aware of imminent changes around AI supporting: a) routine office work such as billing, b) clinical documentation, c) medical education, and d) routine monitoring of symptoms. These changes will likely happen rapidly. In summer 2023, the largest electronic medical records provider in the US, Epic Systems, announced a partnership with OpenAI to integrate ChatGPT technology¹. The profound impact that these changes will have on the context and delivery of mental health care warrants attention, but an often overlooked and more fundamental question is how AI will change the nature of mental health care itself, in terms of improving prevention, diagnosis and treatment.

Research on non-clinical samples suggests that AI may augment text-based support programs, but assessments have focused on perceived empathy rather than clinical outcomes. While the former is an important development, it is only a first step towards progressing from feasibility to acceptability and from efficacy to effectiveness. A century of accessible self-help books, nearly 60 years of mental health chatbots (ELIZA was created in 1966), nearly 30 years of home Internet with access to free online cognitive behavioral therapy and chatrooms, over a decade of smartphone-based mental health apps and text message support programs, and the recent expansion of video-based telehealth together highlight that access to resources alone is not a panacea for prevention. The true target for AI preventive programs should not be replicating previous work, but rather developing new models able to provide personalized, environmentally and culturally responsive, and scalable support that works effectively for users across all countries and regions.

Computer-based diagnosis programs have existed for decades and have not transformed care. Many studies to date suggest that new AI models can diagnose mental health conditions in the context of standardized exam questions or simple case examples². This is important research, and there is evidence of improvement with newer models, but the approach belies the clinical reality of how diagnoses are made and used in clinical care. The future of diagnosis in the 21st century can be more inclusive, draw from diverse sources of information, and be outcomes-driven. The true target for AI programs will be to integrate information from clinical examination, patient self-report, digital phenotyping, genetics, neuroimaging and clinical judgement into novel diagnostic categories that may better reflect the underlying nature of mental illness and offer practical value in guiding effective treatments and cures.

Currently, there is a lack of evidence about how AI programs can guide mental health treatment. Impressive studies show that AI can help select psychiatric medications³, but these studies often rely on complete and labelled data sets, which is not the clinical reality, and they lack prospective validation. A recent study in oncology points to an emerging challenge: when ChatGPT 3.5 was asked to provide cancer treatment recommendations, the chatbot tended to mix incorrect recommendations with correct ones, making errors difficult to detect even for experts⁴. The true target for AI programs will be to realize the potential of personalized psychiatry and to guide treatment in ways that improve outcomes for patients.

For AI to support prevention, diagnosis and treatment, there are clear next steps. Utilizing a well-established framework for technology evaluation in mental health, these include advances in equity, privacy, evidence, clinical engagement, and interoperability⁵. The first area of focus is equity. Because the datasets used to train current AI models are drawn from non-psychiatric sources, all major AI chatbots today clearly state that their products must not be used for clinical purposes. Even with proper training, risks of AI bias must be carefully explored, given numerous recent examples of clear harm in other medical fields⁶. A glance at the images generated by an AI program asked to draw "schizophrenia"⁷ illustrates the extent to which extreme stigma and harmful bias have informed what current AI models conceptualize as mental illness.

A second area of focus is privacy, with current AI chatbots unable to protect personal health information. Large language models are trained on data scraped from the Internet, which may encompass sensitive personal health information. The European Union is examining whether OpenAI's ChatGPT complies with the General Data Protection Regulation's requirement that informed consent be obtained, or strong public health justifications established, before sensitive information is processed. In the US, privacy issues arise from the risk that clinicians may input sensitive patient data into chatbots; this concern prompted the American Psychiatric Association to release an advisory in summer 2023 noting that clinicians should not enter any patient information into any AI chatbot⁸. To allow integration into health care, authorities will need to determine whether chatbots meet privacy regulations.

A third focus is the next generation of evidence. Current studies suggesting that chatbots can perform binary diagnostic classification (e.g., presence of any depression or none) offer limited practical clinical value. The potential to offer differential diagnoses based on multimodal data sources (e.g., medical records, genetic results, neuroimaging data) remains appealing but as yet untested. Evidence of the true potential for supporting care remains elusive, and the harm caused to the eating disorder community by the public release (and withdrawal within one week) of the Tessa chatbot highlights that more robust evidence is necessary than that currently collected⁹. As with other medical devices, clinical claims should be supported by high-quality randomized controlled trials that employ digital placebo groups (e.g., a non-therapeutic chatbot).

Fourth, a focus on engagement is critical. We already know that engagement with mental health apps has been minimal, and we can learn from those experiences. Engagement is not only a patient challenge: clinician uptake of this technology is also a widely cited barrier and will require careful attention to implementation frameworks. These frameworks consistently highlight that, while innovation is important, there must be a concomitant focus on the recipients (i.e., education and training for both patients and clinicians) as well as on the context of care (e.g., regulation, reimbursement, clinical workflow). The principles of the non-adoption, abandonment, scale-up, spread and sustainability (NASSS) framework remain relevant to AI and offer tangible targets for avoiding failure.

Fifth, and related, AI models need to be well integrated into the health care system. The era of standalone or self-help programs is rapidly ending, with the realization that such tools often fragment care, cannot scale, and are rarely sustainable. Beyond data interoperability, this requires careful design of how AI interacts with all aspects of the health care system, and collaboration not only with clinicians but also with patients, family members, administrators, regulators and, of course, AI developers. While generative AI technologies continue to evolve, the clinical community today has the opportunity to evolve as well. Clinicians do not need to become experts in generative AI, but a new focus on education about current capabilities, risks and benefits can be a tangible first step towards more informed decision-making about what role these technologies can and should play in care.

Topics

Artificial Intelligence in Healthcare and Education · Autopsy Techniques and Outcomes · Machine Learning in Healthcare