This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Impact of AI-Generated Hallucinations in Educational Settings: Trends, Gaps, and Future Directions
Citations: 0
Authors: 3
Year: 2025
Abstract
The rapid integration of generative AI tools such as ChatGPT and Gemini into educational settings has transformed the way students access information, complete assignments, and engage with learning materials. These technologies offer efficiency and personalized support. However, they also pose risks, particularly in the form of ‘hallucinations,’ or factually incorrect content generated by AI. This study investigates how AI hallucinations affect student learning, trust in academic systems, and the ability to critically evaluate information. Using a systematic bibliometric approach, 193 peer-reviewed articles published between 2021 and 2026 were analyzed through Scopus, with visual network mapping performed in VOSviewer and trend analysis supported by Biblioshiny. The findings reveal that although interest in using AI and big data as tools for knowledge discovery is growing, discussions around hallucinated content remain limited and fragmented. Co-occurrence analysis shows that terms like “ethics,” “misinformation,” and “trust” have only recently begun to gain traction, particularly after 2023. This paper identifies key trends, research gaps, and future research directions, including the absence of empirical evaluations of hallucination impact, the lack of detection frameworks, and insufficient AI literacy in curricula. It concludes with a call for future interdisciplinary efforts to develop robust safeguards, benchmark datasets, and policy frameworks to ensure the responsible and informed use of AI in learning environments.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,635 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,543 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,051 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,844 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations