OpenAlex · Updated hourly · Last updated: March 16, 2026, 07:49

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Benchmarking Large Language Models for Drug Combination Alerts: Achieving Expert-Level Reliability via Knowledge Grounding and Contextual Reasoning

2026 · 0 citations · Journal of Medicinal Chemistry · Open Access

0 Citations · 7 Authors · Year 2026

Abstract

Large language models (LLMs) have emerged as promising tools in the healthcare sector. However, their reliability in the critical task of identifying risky drug combinations remains unvalidated. Here, we systematically evaluated the potential of LLMs for drug combination alerting under the guidance of the CoMed framework through four aspects: (1) the baseline performance of native LLMs, (2) the contribution of external knowledge grounding via Retrieval-Augmented Generation (RAG), (3) the impact of expert-guided reasoning using context engineering, and (4) the utility of a multiagent architecture for comprehensive and interpretable risk analysis. Notably, by integrating RAG and the context engineering strategy, Qwen2.5-Max-CoT achieved outstanding performance (F1 = 0.971, AUC = 0.982), demonstrating an expert-level balance between precision and recall. Furthermore, a case study on aspirin-warfarin validated CoMed's ability to generate accurate assessments in a structured and traceable HTML report. This study demonstrates that enhanced LLMs can reliably and transparently support drug combination risk alerting and clinical decision-making.
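The knowledge-grounding step described in the abstract can be sketched in minimal form: retrieve evidence snippets relevant to a drug pair, then assemble them with a step-by-step reasoning cue into the prompt sent to the LLM. This is an illustrative sketch only; the toy knowledge base, keyword-overlap retrieval, and prompt template are assumptions for demonstration, not the actual CoMed pipeline or its retriever.

```python
# Illustrative sketch of retrieval-augmented prompting for drug-pair
# risk alerting. Knowledge snippets, scoring, and the prompt template
# are hypothetical placeholders, not the CoMed implementation.

# Toy knowledge base: snippets a real RAG step would index from
# drug-interaction references.
KNOWLEDGE_BASE = [
    "Aspirin inhibits platelet aggregation and can potentiate bleeding.",
    "Warfarin is a vitamin K antagonist with a narrow therapeutic index.",
    "Concurrent aspirin and warfarin use markedly increases bleeding risk.",
    "Metformin lowers hepatic glucose production in type 2 diabetes.",
]


def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank snippets by simple keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda s: len(terms & set(s.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(drug_a: str, drug_b: str) -> str:
    """Assemble a grounded prompt: retrieved evidence plus a
    chain-of-thought reasoning cue, as in the RAG + context
    engineering setup the abstract describes."""
    evidence = retrieve(f"{drug_a} {drug_b} interaction risk")
    context = "\n".join(f"- {snippet}" for snippet in evidence)
    return (
        f"Evidence:\n{context}\n\n"
        f"Question: Is co-administering {drug_a} and {drug_b} risky?\n"
        "Reason step by step, then answer RISKY or SAFE."
    )


print(build_prompt("aspirin", "warfarin"))
```

In a full system, the keyword overlap would be replaced by dense embedding retrieval over a curated interaction database, and the assembled prompt would be dispatched to the evaluated LLM (e.g., Qwen2.5-Max with chain-of-thought).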

Similar works