OpenAlex · Updated hourly · Last updated: Mar 20, 2026, 23:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Large Language Model-Driven Evaluation of Medical Records Using MedCheckLLM

2023 · 2 Citations · Open Access
Open full text at publisher

Citations: 2
Authors: 3
Year: 2023

Abstract

Large Language Models (LLMs) offer potential in healthcare, especially in the evaluation of medical documents. This research introduces MedCheckLLM, a multi-step framework designed for the systematic assessment of medical records against established evidence-based guidelines, a process termed 'guideline-in-the-loop'. By keeping the guidelines separate from the LLM's training data, this approach emphasizes validity, flexibility, and interpretability. Suggested evidence-based guidelines are accessed externally and fed back into the LLM for an evaluation. The method enables guideline updates and personalized protocols for specific patient groups to be implemented without retraining. We applied MedCheckLLM to expert-validated simulated medical reports, focusing on headache diagnoses following International Headache Society guidelines. Findings revealed that MedCheckLLM correctly extracted diagnoses, suggested appropriate guidelines, and accurately evaluated 87% of checklist items, with its evaluations aligning significantly with expert opinions. The system not only enhances healthcare quality assurance but also introduces a transparent and efficient means of applying LLMs in clinical settings. Future work must address privacy and ethical concerns in actual clinical scenarios.
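The guideline-in-the-loop idea described in the abstract — extract a diagnosis, fetch the matching guideline checklist from an external store, then evaluate the record item by item — can be sketched as a minimal pipeline. Everything below is a hypothetical illustration: the function names, the `GUIDELINES` store, and the naive keyword-matching "evaluator" are stand-ins for the LLM calls the actual MedCheckLLM system performs at each step.

```python
# Hypothetical sketch of a "guideline-in-the-loop" pipeline.
# The keyword-matching evaluator below is a stand-in for LLM judgments.

# External guideline store, kept separate from the model: checklists can be
# updated or personalized per patient group without any retraining.
GUIDELINES = {
    "migraine": [
        "at least 5 attacks documented",
        "headache lasting 4-72 hours",
        "unilateral location noted",
        "nausea or photophobia recorded",
    ],
}

def extract_diagnosis(record: str) -> str:
    """Step 1 (stand-in): extract the diagnosis from the record text."""
    for dx in GUIDELINES:
        if dx in record.lower():
            return dx
    raise ValueError("no known diagnosis found in record")

def evaluate_record(record: str) -> dict:
    """Steps 2-3: fetch the suggested checklist externally, then score
    each checklist item against the record (here: crude substring check
    on the item's first two words, instead of an LLM call)."""
    dx = extract_diagnosis(record)
    checklist = GUIDELINES[dx]  # guideline fetched "in the loop"
    text = record.lower()
    results = {item: any(w in text for w in item.lower().split()[:2])
               for item in checklist}
    return {"diagnosis": dx,
            "fulfilled": sum(results.values()),
            "total": len(checklist),
            "items": results}

report = evaluate_record(
    "Patient with migraine; at least 5 attacks, unilateral pain, nausea.")
```

Because the checklist lives outside the model, swapping in an updated guideline edition is a data change, not a retraining step — the separation the abstract highlights.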

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Radiomics and Machine Learning in Medical Imaging