OpenAlex · Updated hourly · Last updated: 02.05.2026, 17:53

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Evaluating large language models' performance in answering common questions on drug-induced liver injury

2025 · 3 citations · JHEP Reports · Open Access

3 citations · 14 authors · Year: 2025

Abstract

Background & Aims: Drug-induced liver injury (DILI) is a complex condition often linked to medication behaviors, with patient education having a crucial role in optimizing outcomes. Large language models (LLMs) could serve as promising tools for scalable patient support, but their utility remains unclear. This study systematically evaluated the capability of six popular open- and closed-source LLMs in addressing common DILI-related queries, focusing on patient-centered education.

Methods: Twenty-eight frequently asked DILI questions were collected with input from hepatologists and patients (n = 15) and categorized into six clinical domains. Responses from six LLMs (GPT-4, GPT-3.5, Claude-2, Claude-1.3, Gemini, and LLaMA-3.1-405B) were anonymized, randomized, and independently evaluated by three hepatologists for accuracy, comprehensiveness, and safety. Additional analyses included automated readability assessment, domain-specific analysis, detailed expert-led error analysis, and direct comparison with physician responses.

Results: <0.05). Error analysis showed that omission of crucial information accounted for 72% of errors, predominantly in GPT-3.5-Turbo, whereas hallucinations were rare (<10%) but notable in LLaMA outputs.

Conclusion: This study represents the first systematic evaluation of LLMs for DILI-focused patient education. High-performing, publicly accessible LLMs demonstrate the potential to deliver accurate, comprehensive, and safe health information, even surpassing physician responses.

Impact and implications: DILI is a complex and multidisciplinary condition in which patient understanding has a crucial role in management outcomes, yet educational resources remain scarce. By systematically evaluating six widely used LLMs, including both open- and closed-source models, this study provides new insights into the potential of artificial intelligence tools to enhance patient education and supplement clinical communication in hepatology.
These findings are particularly important for physicians, patient educators, and healthcare policymakers seeking scalable and reliable strategies to support liver disease management. Although further refinement and clinical oversight are necessary to ensure content safety and accuracy, integrating LLM-based tools into patient education initiatives could offer a practical pathway to improve health literacy and engagement in real-world settings.
