This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Development of Large Language Models from Past to Present
Citations: 0
Authors: 2
Year: 2025
Abstract
Large Language Models (LLMs) are a groundbreaking technology for human-computer interaction. Applied across many natural language processing areas such as text generation, question answering, translation, and coding, LLMs achieve high performance on complex language tasks thanks to their transformer architecture and self-attention mechanism. The development process that began with word embedding techniques advanced significantly with models such as BERT and GPT. Training on large datasets with powerful GPUs has also markedly improved grammar and context learning. Adapter-based fine-tuning methods increase the accessibility of models by reducing training costs. LLMs, which have transformed fields such as health, law, finance, education, and content production, can be integrated with different data types through multimodal models. Potential future uses include personalized education, autonomous systems, and bio-artificial intelligence integration. However, challenges such as high computational costs and data quality must also be considered. In conclusion, LLMs have revolutionized the field of natural language processing through their ability to understand and generate human-like language. Efficient algorithms and innovative solutions are needed for the development and dissemination of language models. In the future, LLMs are expected to find a wide range of applications and become more widely used and visible in many fields. From a Management Information Systems perspective, the effective integration of LLMs into corporate processes is expected to play a critical role in decision support, information management, and increasing overall managerial efficiency.
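The self-attention mechanism the abstract credits for LLM performance can be sketched in a few lines. The following is a minimal, illustrative implementation of scaled dot-product self-attention using NumPy; the matrix names (`Wq`, `Wk`, `Wv`) and dimensions are assumptions for the sketch, not taken from the paper itself.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X (seq_len x d_model)."""
    # Project each token into query, key, and value vectors
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Pairwise similarity between positions, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mix of all value vectors
    return weights @ V

# Toy example: a sequence of 3 tokens with model dimension 4
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (3, 4): one context-aware vector per token
```

In a real transformer this operation runs with multiple heads in parallel and is followed by feed-forward layers; the sketch shows only the core mechanism by which every token attends to every other token.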
Similar Works
Federated Learning: Challenges, Methods, and Future Directions
2020 · 4,398 citations
Deep Learning: Methods and Applications
2014 · 3,306 citations
Mobile Edge Computing: A Survey on Architecture and Computation Offloading
2017 · 2,900 citations
Machine Learning: An Artificial Intelligence Approach
2013 · 2,639 citations
Machine learning and deep learning
2021 · 2,335 citations