OpenAlex · Updated hourly · Last updated: 24.03.2026, 23:59

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Aiding Large Language Models Using Clinical Scoresheets for Neurobehavioral Diagnostic Classification From Text: Algorithm Development and Validation (Preprint)

2025 · 0 citations · Open Access

0 citations · 7 authors · Year: 2025

Abstract

BACKGROUND: Large language models (LLMs) have demonstrated the ability to perform complex tasks traditionally requiring human intelligence. However, their use in automated diagnostics for psychiatry and behavioral sciences remains under-studied.

OBJECTIVE: This study aimed to evaluate whether incorporating structured clinical assessment scales improved the diagnostic performance of LLM-based chatbots for neuropsychiatric conditions (we evaluated autism spectrum disorder, aphasia, and depression datasets) across two prompting strategies: (1) direct diagnosis and (2) code generation. We aimed to contextualize LLM-based diagnostic performance by benchmarking it against prior work that applied traditional machine learning classifiers to the same datasets, allowing us to assess whether LLMs offer competitive or complementary capabilities in clinical classification tasks.

METHODS: We tested two approaches using ChatGPT, Gemini, and Claude models: (1) direct diagnostic querying and (2) execution of chatbot-generated code for classification. Three diagnostic datasets were used: ASDBank (autism spectrum disorder), AphasiaBank (aphasia), and Distress Analysis Interview Corpus-Wizard-of-Oz interviews (depression and related conditions). Each approach was evaluated with and without the aid of clinical assessment scales. Performance was compared to existing machine learning benchmarks on these datasets.

RESULTS: Across all 3 datasets, incorporating clinical assessment scales led to little improvement in performance, and results remained inconsistent and generally below those reported in previous studies. On the AphasiaBank dataset, the direct diagnosis approach using ChatGPT with GPT-4 produced a low F1-score of 65.6% and a specificity of 33%. The code generation method improved results, with ChatGPT with GPT-4o reaching an F1-score of 81.4%, a specificity of 78.6%, and a sensitivity of 84.3%. ChatGPT with GPT-o3 and Gemini 2.5 Pro performed even better, with F1-scores of 86.5% and 84.3%, respectively. For the ASDBank dataset, direct diagnosis results were lower, with F1-scores of 56% for ChatGPT with GPT-4 and 54% for ChatGPT with GPT-4o. Under code generation, ChatGPT with GPT-o3 reached 67.9%, and Claude 3.5 performed reasonably well at 60%. Gemini 2.5 Pro failed to respond under this assessment condition. On the Distress Analysis Interview Corpus-Wizard-of-Oz dataset, direct diagnosis yielded high accuracy (70.9%) but a poor F1-score of 8% using ChatGPT with GPT-4o. Code generation improved specificity (88.6% with ChatGPT with GPT-4o), but F1-scores remained low overall. These findings suggest that, while clinical scales may help structure outputs, prompting alone remains insufficient for consistent diagnostic accuracy.

CONCLUSIONS: Current LLM-based chatbots, when prompted naively, underperform on psychiatric and behavioral diagnostic tasks compared to specialized machine learning models. Clinical assessment scales might modestly aid chatbot performance, but more sophisticated prompt engineering and domain integration are likely required to reach clinically actionable standards.
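The results above are reported as F1-score, sensitivity, and specificity. For readers unfamiliar with these metrics, the following is a minimal illustrative sketch of how they are computed from binary diagnostic predictions; the function name and example labels are hypothetical and not taken from the paper.

```python
def diagnostic_metrics(y_true, y_pred):
    """Compute F1-score, sensitivity, and specificity for binary labels
    (1 = condition present, 0 = condition absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    # Sensitivity (recall): share of true cases the classifier detects.
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    # Specificity: share of non-cases correctly ruled out.
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    # F1: harmonic mean of precision and sensitivity.
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"f1": f1, "sensitivity": sensitivity, "specificity": specificity}
```

A high accuracy paired with a very low F1-score, as on the depression dataset, typically indicates a classifier that mostly predicts the majority (negative) class on imbalanced data.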



Topics

Biomedical Text Mining and Ontologies · Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills