The ChatGPT therapist will see you now: Navigating generative artificial intelligence's potential in addiction medicine research and patient care

2023 · 14 citations · 4 authors · Addiction · Open Access

Abstract

Generative AI offers potential for enhancing addiction medicine research and practice by analyzing medical literature, improving research efficiency, streamlining clinical workflows, and even providing counseling support. Addressing challenges such as confabulation, biases, and patient acceptance and adoption is crucial for responsible integration and for improving care in substance use disorders.

Artificial intelligence (AI), primarily in decision support and predictive modeling, has increasingly been used in medicine, including the field of addiction [1]. With the widespread release of ChatGPT, there is now active interest in, and a need for, rigorous evaluation of the potential for generative AI to enhance addiction medicine research and practice. ChatGPT is a chatbot powered by an underlying large language model (LLM), named Generative Pre-trained Transformer (GPT), that can simulate human conversation. LLMs are trained on extensive text datasets to predict the next word in a given sequence [2]. As LLMs have grown, they have demonstrated emergent capabilities, including question answering, summarization and even reasoning. Applying these models to tasks requiring medical knowledge has yielded impressive outcomes, with GPT-4, the model behind ChatGPT Plus, correctly answering 90% of the United States Medical Licensing Examination questions presented to it [3].

Generative AI holds the potential to transform medical research. One notable application uses LLMs' ability to analyze vast amounts of literature to identify prior work that informs current research. However, researchers using LLMs for this purpose still need a deep understanding of the subject matter, because LLMs are prone to presenting incorrect information convincingly, a failure mode known as confabulation. Elicit.org is an example of a tool that uses LLMs to automate literature review, producing a list of relevant literature, with concise summaries, in response to a user's question. Notably, the LLMs used in this tool must be constrained by additional AI tools to limit confabulated information in the results [4].

At their core, LLMs are transformer models that learn relationships within data. Fouladvand et al. [5, 6] used a transformer model to predict opioid use disorder from multiple data sources, relying on the model to extract associations within and between those sources. Notably, however, their model did not include unstructured text-based data, which remains challenging to work with. Our group is developing machine learning models to predict retention in treatment for patients with opioid use disorder, with the goal of determining whether unstructured electronic health record (EHR) data, unlocked with LLMs and combined with structured EHR data, improves model performance [7]; a schematic sketch of this hybrid approach is given below. Similar techniques that combine LLMs with structured data have already been deployed to identify clinical trial cohorts more efficiently [8]. Using LLMs alongside, rather than in place of, more traditional approaches may limit the risk of confabulation.

The text generation capabilities of LLMs also offer promising possibilities for academic publishing. Researchers have hypothesized that LLMs could contribute to everything from automated manuscript editing to composing complete manuscripts. Nevertheless, until their shortfalls are addressed, LLMs may not be trusted to perform such tasks.
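To make the hybrid modeling approach described above concrete, the following minimal Python sketch concatenates structured EHR fields with note-derived features before fitting a single conventional classifier. This is an illustrative sketch only: the record fields, the extract_note_features helper and the logistic regression baseline are assumptions for illustration and do not describe the authors' actual pipeline [7].

    # Minimal illustrative sketch (hypothetical names throughout): fuse
    # structured EHR fields with LLM-derived features from clinical notes
    # to predict retention in treatment for opioid use disorder.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def extract_note_features(note_text):
        # Placeholder for the LLM step: map a free-text clinical note to a
        # numeric vector, e.g. an embedding or model-extracted indicators
        # such as "housing instability mentioned". Implementation-specific.
        raise NotImplementedError("call an embedding or extraction model here")

    def build_design_matrix(records):
        # Each record is assumed to carry structured fields plus one note.
        rows = []
        for r in records:
            structured = [r["age"], r["num_prior_episodes"], r["on_buprenorphine"]]
            rows.append(np.concatenate([structured, extract_note_features(r["note"])]))
        return np.vstack(rows)

    # Hypothetical usage, where y[i] = 1 if patient i was retained in care:
    #     X = build_design_matrix(records)
    #     auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
    #                           scoring="roc_auc").mean()
    # Comparing this AUC with a structured-features-only baseline tests
    # whether the note-derived features add predictive value.

In this arrangement the LLM supplies input features to a conventional model rather than making clinical predictions itself, consistent with the point above about using LLMs alongside, rather than in place of, traditional approaches.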
The limitations of generative AI were central to a debate that ensued after ChatGPT was listed as an author on a preprint manuscript uploaded to medRxiv in December 2022 [9]. Some argued that the term “author” was misleading because ChatGPT could not take responsibility for the work's validity. Although tools like ChatGPT will likely improve in accuracy over time, the responsibility for ensuring that accuracy ultimately rests with humans. To harness the full potential of generative AI in medical research, it is crucial that we develop robust guidelines and validation processes to ensure the responsible and accurate use of these models.

The potential applications of LLMs in clinical medicine are extensive; however, it remains unclear whether they can fully realize their theorized potential. Multiple companies market tools that summarize historical patient records and streamline encounter documentation. Collaborations between Microsoft and Epic are underway to implement GPT-4-based tools in EHRs, with select pilot sites already testing automated message-drafting tools [10]. Remarkably, in response to real-world patient questions, ChatGPT garnered higher quality ratings and was preferred over physician-generated responses, underscoring its potential to enhance the efficiency and effectiveness of clinical communication [11]. Innovations that reduce the time physicians spend in EHRs and increase the time they can dedicate to patients would certainly be welcome and could help alleviate physician burnout. The integration of LLMs into healthcare systems holds great promise for optimizing workflows, but it remains to be seen whether their role will be limited to that of a scribe or will extend to active contribution to clinical decision-making.

Addiction medicine presents unique and promising opportunities because the field relies heavily upon behavioral interventions and mutual support. Accessing counseling for substance use disorders can be challenging, but chatbot tools powered by LLMs have the potential to increase access to these important forms of treatment. Use of Woebot, a chatbot designed to treat substance use disorders based on psychotherapy principles, reduced substance use compared with patients on a therapy waitlist [12]. Sharma et al. [13] demonstrated that peer support augmented by a generative AI tool improved empathetic responses toward patients. Incorporating AI into psychotherapy nevertheless carries inherent risks, as confabulation by a model powering a psychotherapy tool could cause genuine harm to a patient. Given that a positive psychotherapist-patient relationship is associated with improved outcomes, it is unknown whether those outcomes can be maintained by chatbots that can mimic, but not possess, genuine empathy and understanding. The effectiveness of these new tools will require evaluation against the current standard of care, and it will be crucial to consider the implications of reduced, and perhaps absent, human-to-human relationships in addiction treatment [14].

The incorporation of generative AI into addiction medicine research and practice is inevitable and will present a unique set of challenges. A Pew Research Center survey found that a majority of respondents were uncomfortable with the use of AI by their healthcare providers [15].
When used in the care of socially vulnerable populations, including individuals who use illicit drugs, the risks associated with these tools are further amplified. It is our responsibility to ensure that generative AI is implemented in a manner that enhances addiction research and care while adhering to the ethical and humanistic values central to medicine.

Steven Tate: Conceptualization (equal); writing – original draft (lead); writing – review and editing (equal). Sajjad Fouladvand: Conceptualization (equal); writing – original draft (supporting); writing – review and editing (equal). Jonathan Chen: Conceptualization (equal); supervision (equal); writing – review and editing (supporting). Chwen-Yuen Angie Chen: Conceptualization (equal); supervision (equal); writing – review and editing (supporting).

The project described was supported by Grant Number UG1 DA015815 from the National Institute on Drug Abuse (NIDA). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of NIDA or the National Institutes of Health (NIH). We thank Anna Lembke for her comments on an early draft of this work.

S.T., S.F. and J.H.C. have received support from NIDA CTN grant 0136. J.H.C. has received research funding support from the Stanford Artificial Intelligence in Medicine and Imaging - Human-Centered Artificial Intelligence (AIMI-HAI) Partnership; from Google, Inc., through a research collaboration to leverage EHR data to predict a range of clinical outcomes; and from the American Heart Association Strategically Focused Research Network on the Science of Diversity in Clinical Trials. J.H.C. is a co-founder of Reaction Explorer LLC, which develops and licenses organic chemistry education software, and has received paid consulting fees as a medical expert witness from Sutton Pierce and from Younker Hyde MacFarlane PLLC.

Data sharing is not applicable to this article, as no new data were created or analyzed in this study.

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · COVID-19 diagnosis using AI