OpenAlex · Updated hourly · Last updated: April 9, 2026, 08:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI Hallucinations in Retrieval-Augmented and Generative Systems: A Rigorous Review of Definitions, Failure Mechanisms, Evaluation, and Mitigation Strategies

2026 · 0 citations · EDRAAK · Open Access

Citations: 0 · Authors: 6 · Year: 2026

Abstract

This review provides a comprehensive evaluation of research on AI hallucinations across large language models (LLMs), retrieval-augmented generation (RAG), and multimodal systems, with applications in healthcare, education, law, cybersecurity, business, and tourism. Using the bibliography of the source document as its review corpus, the paper synthesizes competing definitions, traces the causes of hallucination at both the model stage and the pipeline stage, evaluates detection and assessment approaches, and develops a working framework of mitigation methods. The review also finds that user trust, interface behavior, anthropomorphic design, legal accountability, and regulatory oversight now shape how hallucinated output affects real operational systems. Three conclusions emerge. First, hallucination is best understood as a family of distinct errors, including unsupported content, weak grounding, context conflict, false citations, and misleading confidence. Second, RAG prevents certain failure modes but introduces new ones related to retrieval quality, evidence selection, and grounding fidelity. Third, a stack that combines corpus control, retrieval validation, constrained generation, uncertainty quantification, and human review is the most defensible approach for high-risk operational deployment. The paper closes with three main recommendations spanning operational metrics and research.
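The five-stage mitigation stack named in the abstract (corpus control, retrieval validation, constrained generation, uncertainty quantification, human review) can be read as a linear pipeline. The sketch below illustrates that reading with toy placeholder stages; every function, the toy corpus, and the scoring logic are hypothetical illustrations by the editor, not methods from the paper.

```python
# Hypothetical sketch of a five-stage hallucination-mitigation pipeline.
# Each stage is a deliberately simple stand-in showing where the control sits.

CORPUS = {
    "doc1": "RAG systems ground answers in retrieved evidence.",
    "doc2": "Hallucination means generating unsupported content.",
}

def corpus_control(corpus):
    # Stage 1: admit only vetted, non-empty documents into the index.
    return {k: v for k, v in corpus.items() if v.strip()}

def retrieve(query, corpus, k=1):
    # Toy retriever: rank documents by word overlap with the query.
    qwords = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(qwords & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def retrieval_validation(query, hits):
    # Stage 2: discard hits with zero lexical overlap with the query.
    qwords = set(query.lower().split())
    return [(k, v) for k, v in hits if qwords & set(v.lower().split())]

def constrained_generation(hits):
    # Stage 3: answer only by quoting retrieved evidence (extractive),
    # so the output cannot stray beyond the grounded text.
    return hits[0][1] if hits else None

def uncertainty_quantification(answer, hits):
    # Stage 4: crude confidence score: 1.0 with evidence, 0.0 without.
    return 1.0 if answer and hits else 0.0

def human_review_required(confidence, threshold=0.5):
    # Stage 5: route low-confidence answers to a human reviewer.
    return confidence < threshold

def answer_query(query):
    corpus = corpus_control(CORPUS)
    hits = retrieval_validation(query, retrieve(query, corpus))
    answer = constrained_generation(hits)
    conf = uncertainty_quantification(answer, hits)
    return answer, conf, human_review_required(conf)
```

A grounded query returns evidence with high confidence, while an out-of-corpus query yields no answer and is flagged for review; real systems would replace each stage with substantially stronger components.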
