OpenAlex · Updated hourly · Last updated: 2026-04-02, 18:04

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Survey of Domain-specific Fine-tuned Large Language Models

2026 · 0 citations · IEEE Access · Open Access
Open full text at the publisher

Citations: 0
Authors: 4
Year: 2026

Abstract

The advancement of large language models (LLMs), such as GPT-3, BERT, and Llama, has introduced a new era in natural language processing (NLP), as they have demonstrated exceptional capabilities across diverse tasks. Fine-tuning these LLMs with domain-specific data has become a popular practice, particularly in domains like education, law, medicine, and software development. This process not only equips LLMs with domain knowledge but also improves their reliability and reduces the likelihood of hallucination. This survey examines a range of domain-specific fine-tuned models, highlighting their unique characteristics, performance enhancements, and the challenges they address, including the demand for extensive computational resources. Furthermore, we explore the fine-tuning methodologies used in developing these models, including instruction tuning, and ultimately inform future research on existing gaps and on how to enhance the effective application of LLMs across diverse domains.
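To illustrate the instruction-tuning approach the abstract refers to, here is a minimal sketch using the Hugging Face transformers and datasets libraries. It assumes a hypothetical domain_instructions.json file of instruction/response pairs; the base model, prompt template, and hyperparameters are illustrative choices, not details taken from the survey.

```python
# Minimal instruction-tuning sketch (illustrative, not the survey's method).
# Assumes a hypothetical domain_instructions.json with
# {"instruction": ..., "response": ...} records, e.g. domain Q&A pairs.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # small stand-in for a larger LLM such as Llama
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("json", data_files="domain_instructions.json")["train"]

def format_and_tokenize(example):
    # Concatenate instruction and response into one causal-LM training sequence.
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['response']}")
    tokens = tokenizer(text, truncation=True, max_length=512,
                       padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()  # next-token prediction
    return tokens

tokenized = dataset.map(format_and_tokenize,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
)
trainer.train()
```

In practice, parameter-efficient methods such as LoRA (via the peft library) are often substituted for full fine-tuning to reduce the computational demands the abstract mentions.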

Similar works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Topic Modeling · Machine Learning in Healthcare