OpenAlex

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Med-Chat: Tuning ChatGLM3-6B with Chinese Medical Dialogue

2024 · 1 citation

Citations: 1 · Authors: 4 · Year: 2024

Abstract

Large language models (LLMs) have demonstrated significant success across a range of natural language processing tasks in general-purpose domains. However, due to limited specialized knowledge, LLMs sometimes generate responses that misrepresent medical facts, commonly referred to as hallucinations. Hallucinations pose potential hazards when LLMs are used in a medical context, including the risk of misdiagnosis. To address this issue, we use the publicly available medical dialogue dataset DISC-Med-SFT, fine-tune the general-purpose model ChatGLM3-6B with the LoRA method, and construct a dedicated medical knowledge base to provide a reliable source of information. Additionally, cascade deduplication is applied to the model’s generated responses to further enhance accuracy and consistency. Experimental results indicate that, compared to the baseline model ChatGLM3-6B, Med-Chat shows notable improvements in both BLEU and F1 score metrics. The resulting Chinese medical LLM, Med-Chat, produces accurate, coherent responses and possesses fundamental medical question-answering skills.
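As a rough illustration of the LoRA fine-tuning step described in the abstract (not the authors' actual training code), the sketch below shows how low-rank adapters are typically attached to ChatGLM3-6B using the Hugging Face peft library; the rank, alpha, dropout, and target-module values are assumptions for illustration only.

# Illustrative LoRA setup (hypothetical hyperparameters; not the paper's configuration)
from transformers import AutoTokenizer, AutoModel
from peft import LoraConfig, TaskType, get_peft_model

model_name = "THUDM/chatglm3-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

# Attach low-rank adapters to the fused attention projection ("query_key_value" in ChatGLM3);
# the base weights stay frozen and only the small adapter matrices are trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # adapter rank (assumed)
    lora_alpha=32,      # scaling factor (assumed)
    lora_dropout=0.05,  # adapter dropout (assumed)
    target_modules=["query_key_value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small fraction of trainable weights

Training would then proceed on the DISC-Med-SFT dialogue pairs with a standard causal-language-modeling objective, updating only the adapter weights rather than the full 6B-parameter model.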
