This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A Survey of Domain-specific Fine-tuned Large Language Models
Citations: 0
Authors: 4
Year: 2026
Abstract
The advancement of large language models (LLMs), such as GPT-3, BERT, and Llama, has introduced a new era in natural language processing (NLP), as these models have demonstrated exceptional capabilities across diverse tasks. Fine-tuning LLMs with domain-specific data has become a popular practice, particularly in domains like education, law, medicine, and software development. This process not only equips LLMs with domain knowledge but also enhances their capabilities by improving reliability and reducing the likelihood of hallucination. This survey examines a range of domain-specific fine-tuned models, highlighting their unique characteristics, performance improvements, and the challenges they address, including the demand for extensive computational resources. Furthermore, we explore the fine-tuning methodologies used in developing these models, including instruction tuning, and identify existing gaps to inform future research on applying LLMs effectively across diverse domains.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,357 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,221 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,640 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,482 citations