This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Med-Chat: Tuning ChatGLM3-6B with Chinese Medical Dialogue
Citations: 1 · Authors: 4 · Year: 2024
Abstract
Large language models (LLMs) have demonstrated significant success across a range of natural language processing tasks in general-purpose domains. However, due to limited specialized knowledge, LLMs sometimes generate responses that misrepresent medical facts, a phenomenon commonly referred to as hallucination. Hallucinations pose potential hazards when LLMs are used in a medical context, including the risk of misdiagnosis. To address this issue, we use the publicly available medical dialogue dataset DISC-Med-SFT, fine-tune the general-purpose model ChatGLM3-6B with the LoRA method, and construct a dedicated medical knowledge base to provide a reliable source of information. Additionally, cascade deduplication is applied to the model's generated responses to further improve accuracy and consistency. Experimental results indicate that, compared to the baseline model ChatGLM3-6B, Med-Chat achieves notable improvements in both BLEU and F1 metrics. The resulting Chinese medical LLM, Med-Chat, produces accurate, coherent responses and demonstrates fundamental medical question-answering ability.
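The LoRA method mentioned in the abstract freezes the pretrained weights and learns a low-rank update. A minimal numeric sketch of that idea, with hypothetical dimensions (the paper applies LoRA to ChatGLM3-6B through standard fine-tuning tooling, not this toy code):

```python
import numpy as np

# Hypothetical sizes for illustration only; real LoRA targets the
# attention projection matrices of a 6B-parameter transformer.
d_out, d_in, r, alpha = 8, 8, 2, 4

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                 # zero-initialized, so the
                                         # adapter starts as a no-op

def lora_forward(x):
    # y = W x + (alpha / r) * B A x; during tuning only A and B are updated,
    # so the number of trainable parameters is r * (d_in + d_out), not d_in * d_out.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B = 0, the adapted model reproduces the base model exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Because `B` starts at zero, fine-tuning begins from the base model's behavior and only gradually injects the learned medical-domain update.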
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations