This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Small Language Models (SLMs): Concepts, Advantages, Limitations and Applications
Citations: 0
Authors: 1
Year: 2026
Abstract
This article explores Small Language Models (SLMs) as an efficient and sustainable alternative to Large Language Models (LLMs). While LLMs such as GPT-4, Llama 3, and Google Gemini demonstrate impressive capabilities, their billions or trillions of parameters demand massive computational and energy resources. SLMs, with parameters ranging from a few million to a few billion, offer comparable performance in specific domains, with significant advantages: faster and more economical training, 5 to 10 times faster inference, 10 to 20 times greater energy efficiency, and the ability to run on edge devices. The article examines definitions, training and inference characteristics, competitive advantages, and known limitations.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,534 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,423 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,917 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,582 citations