This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Enhancing Medical Language Understanding: Adapting LLMs to the Medical Domain through Hybrid Granularity Mask Learning
Citations: 4
Authors: 6
Year: 2023
Abstract
Large language models have made remarkable strides in natural language understanding and generation. However, their performance in specialized fields like medicine often falls short due to the lack of domain-specific knowledge during pre-training. While fine-tuning on labeled medical data is a common approach for task adaptation, it may not capture the comprehensive medical knowledge required. In this paper, we propose a Hybrid Granularity Mask Learning (HGM) method for domain adaptation in the medical field. Our method incorporates multi-level linguistic characteristics, including tokens, entities, and sub-sentences, to enable the model to acquire medical knowledge comprehensively. We fine-tune medical-specific language models derived from ChatGLM-6B and Bloom-7B on downstream medical tasks and evaluate their performance. The results demonstrate a significant improvement over the baseline, affirming the effectiveness of our proposed method.
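The abstract describes masking at three granularities (token, entity, and sub-sentence) so the model learns medical knowledge at multiple linguistic levels. The paper's exact procedure is not given here, so the following is only a minimal sketch of the general idea: a hypothetical `hybrid_granularity_mask` function that masks a token sequence at a chosen granularity, with entity and sub-sentence spans assumed to be supplied by an upstream annotator.

```python
import random

MASK = "[MASK]"

def hybrid_granularity_mask(tokens, entity_spans, subsentence_spans,
                            granularity, mask_ratio=0.15, rng=None):
    """Mask a token sequence at one of three granularities.

    tokens: list of token strings
    entity_spans / subsentence_spans: lists of (start, end) index pairs,
        end exclusive (assumed to come from an external annotator)
    granularity: "token", "entity", or "subsentence"
    mask_ratio: probability that each candidate unit is masked
    """
    rng = rng or random.Random()
    out = list(tokens)
    if granularity == "token":
        # Mask individual tokens independently (classic MLM-style masking).
        for i in range(len(out)):
            if rng.random() < mask_ratio:
                out[i] = MASK
    elif granularity == "entity":
        # Mask whole entity mentions so the model must recover medical terms.
        for start, end in entity_spans:
            if rng.random() < mask_ratio:
                for i in range(start, end):
                    out[i] = MASK
    elif granularity == "subsentence":
        # Mask an entire clause so the model must reconstruct longer spans.
        for start, end in subsentence_spans:
            if rng.random() < mask_ratio:
                for i in range(start, end):
                    out[i] = MASK
    else:
        raise ValueError(f"unknown granularity: {granularity}")
    return out

# Illustration: with mask_ratio=1.0 the entity span (3, 6) is fully masked.
tokens = ["the", "patient", "has", "type", "2", "diabetes"]
print(hybrid_granularity_mask(tokens, [(3, 6)], [(0, 6)],
                              "entity", mask_ratio=1.0))
```

In practice, the training pipeline would likely sample a granularity per example (or mix granularities per batch) so a single model sees all three mask levels during adaptation; the spans, ratios, and sampling policy here are illustrative assumptions, not the paper's reported settings.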