This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
LARGE LANGUAGE MODEL AND HALLUCINATIONS: A BIBLIOMETRIC REVIEW
Citations: 0
Authors: 8
Year: 2025
Abstract
The rapid advancement and widespread adoption of Large Language Models (LLMs) have spurred increasing interest in understanding their capabilities and limitations, particularly the phenomenon of "hallucination"—the generation of plausible yet factually incorrect information. This bibliometric review aims to map the scientific landscape and research trends surrounding LLMs and hallucinations within the broader context of Artificial Intelligence (AI). Despite the growing relevance of these issues, the scholarly discourse remains fragmented, necessitating a comprehensive synthesis of the existing literature. To address this gap, we conducted a systematic search using the keywords “LLM,” “hallucination,” and “AI” across the Scopus database. The resulting dataset, comprising 513 relevant publications, was cleaned and standardised using OpenRefine. Further analysis was conducted using Scopus Analyser to identify publication trends, citation patterns, and prolific contributors. Meanwhile, VOSviewer software was employed to construct co-authorship networks, keyword co-occurrence maps, and thematic clusters. The analysis revealed a marked increase in publications post-2020, with a significant concentration of research in computer science, linguistics, and ethics. Keyword mapping highlighted emerging themes such as factual consistency, trustworthiness, and prompt engineering. Co-authorship networks revealed a growing yet still loosely connected research community. These findings suggest that while interest in LLM hallucinations is rising, there is a need for deeper interdisciplinary collaboration and more rigorous evaluation frameworks. This study provides a foundational overview of the current research landscape and identifies critical directions for future investigation, especially in mitigating hallucinations and enhancing the reliability of LLM-generated content.
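The keyword co-occurrence maps mentioned in the abstract (as produced by VOSviewer) are built from counts of how often keyword pairs appear together in the same publication. As a minimal sketch of that underlying computation, the following uses hypothetical keyword lists (illustrative stand-ins for records exported from Scopus, not data from this study):

```python
from itertools import combinations
from collections import Counter

# Hypothetical author-keyword lists, one per publication.
publications = [
    ["llm", "hallucination", "ai"],
    ["llm", "factual consistency", "trustworthiness"],
    ["llm", "hallucination", "prompt engineering"],
    ["hallucination", "factual consistency", "ai"],
]

def cooccurrence_counts(records):
    """Count how often each unordered keyword pair occurs in the same record."""
    counts = Counter()
    for keywords in records:
        # Deduplicate and sort so each pair is counted once per record,
        # in a canonical order.
        for pair in combinations(sorted(set(keywords)), 2):
            counts[pair] += 1
    return counts

counts = cooccurrence_counts(publications)
# The most frequent pairs correspond to the strongest links in the map.
print(counts.most_common(3))
```

Tools such as VOSviewer then apply thesaurus-based normalisation and layout/clustering algorithms on top of such a co-occurrence matrix; this sketch covers only the counting step.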
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations