OpenAlex · Updated hourly · Last updated: Mar 23, 2026, 21:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Cardiology-Chat: A Multi-LLMs Powered System for Cardiac Diagnostic Reasoning and Clinical Support

2026 · 0 citations · IEEE Journal of Translational Engineering in Health and Medicine · Open Access
Open full text at publisher

Citations: 0 · Authors: 6 · Year: 2026

Abstract

Cardiovascular diseases are a leading global cause of death, yet their accurate diagnosis remains challenging. While Large Language Models (LLMs) show promise in assisting disease diagnosis in general, their adoption in cardiology is hindered by three critical limitations: hallucination, inadequate domain-specific reasoning, and restricted knowledge coverage. To overcome these barriers, we developed Cardiology-Chat, an LLM-based system tailored specifically for cardiology. The system employs a three-step reasoning framework: (1) parsing user queries with Llama 3.1 8B-instruct to extract key clinical information, (2) retrieving evidence from the knowledge base via Retrieval-Augmented Generation (RAG), and (3) generating diagnostic conclusions with the fine-tuned Llama model. Two critical components support the system's functionality: a specialized cardiovascular vector knowledge base, constructed from multiple data sources to enhance the RAG subsystem, and a Chain-of-Thought-augmented dataset designed to strengthen the LLM's in-depth reasoning capabilities. In addition, multiple LLMs were adopted to mitigate possible "self-consistency" bias. Experiments on public cardiology QA and real clinical cases demonstrated significant performance improvements, achieving 0.796 accuracy and 0.807 F1, respectively.
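The three-step framework described in the abstract (parse the query, retrieve evidence, generate a conclusion) can be sketched in miniature. This is an illustrative stand-in only: the toy knowledge base, the bag-of-words "embedding," and the function names are assumptions, replacing the paper's Llama 3.1 8B-instruct parser, vector knowledge base, and fine-tuned Llama generator.

```python
import math
from collections import Counter

# Toy stand-in for the specialized cardiovascular vector knowledge base.
KNOWLEDGE_BASE = [
    "Chest pain radiating to the left arm with ST elevation suggests myocardial infarction.",
    "Exertional dyspnea and ankle edema are common findings in heart failure.",
    "Palpitations with an irregularly irregular pulse point to atrial fibrillation.",
]

def embed(text: str) -> Counter:
    """Bag-of-words pseudo-embedding standing in for a real vector encoder."""
    return Counter(text.lower().replace(".", "").replace(",", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def parse_query(query: str) -> Counter:
    # Step 1: in the paper, Llama 3.1 8B-instruct extracts key clinical
    # information from the query; here we simply embed the raw text.
    return embed(query)

def retrieve(query_vec: Counter, k: int = 1) -> list[str]:
    # Step 2: RAG-style retrieval, ranking knowledge-base entries by
    # similarity to the parsed query.
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: cosine(query_vec, embed(doc)),
        reverse=True,
    )
    return ranked[:k]

def diagnose(query: str) -> str:
    # Step 3: the fine-tuned LLM would reason over the query plus the
    # retrieved evidence; here the top evidence serves as the conclusion.
    evidence = retrieve(parse_query(query))
    return evidence[0]

print(diagnose("Patient reports chest pain radiating to the left arm"))
```

In the actual system each of these steps is handled by an LLM component, and multiple LLMs cross-check the output to mitigate self-consistency bias; the sketch only shows how the three stages hand data to one another.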

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Machine Learning in Healthcare