OpenAlex · Updated hourly · Last updated: 18.03.2026, 08:25

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Survey of Retrieval-Augmented Generation (RAG) for Large Language Models

Open full text at the publisher

Citations: 0 · Authors: 7 · Year: 2025
Abstract

While Large Language Models (LLMs) are revolutionary, their deployment is constrained by inherent limitations such as factual hallucination and static knowledge. This survey systematically reviews Retrieval-Augmented Generation (RAG), a key paradigm for addressing these challenges by grounding LLMs in external, verifiable knowledge. To overcome the flaws of standalone models, RAG integrates LLMs with updatable knowledge bases, a hybrid approach that significantly enhances output accuracy and trustworthiness. Our primary finding is the technology’s clear evolutionary trajectory, which we structure into three stages: Naive, Advanced, and Modular RAG. This progression demonstrates a shift away from monolithic parametric memory towards intelligent systems that interact with external data. By summarizing the field’s progression, key challenges like retriever-generator alignment, and future directions such as integration with agentic architectures, this work concludes that RAG is a crucial technology for propelling AI to be more evidence-based and capable of complex reasoning.
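The retrieve-then-generate loop the abstract describes can be sketched minimally: fetch the documents most relevant to a query from an updatable knowledge base, then prepend them to the prompt so generation is grounded in external evidence. The corpus, keyword-overlap scorer, and prompt template below are illustrative assumptions for a "Naive RAG" stage, not the survey's own method; a real system would use a dense or BM25 retriever and an actual LLM.

```python
# Minimal sketch of a Naive RAG pipeline: retrieve, augment, then generate.
# The knowledge base and scoring function are illustrative stand-ins.
from collections import Counter

KNOWLEDGE_BASE = [
    "RAG grounds large language models in external, verifiable knowledge.",
    "Factual hallucination occurs when a model invents unsupported claims.",
    "Modular RAG lets retrievers and generators be swapped independently.",
]

def tokenize(text: str) -> list[str]:
    return [t.strip(".,?").lower() for t in text.split()]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Score documents by term overlap with the query (a stand-in for a
    dense or BM25 retriever) and return the top-k matches."""
    q = Counter(tokenize(query))
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: sum(q[t] for t in tokenize(doc)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Augment the query with retrieved context before handing it to a
    generator; the LLM call itself is omitted here."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is factual hallucination?"))
```

The Advanced and Modular stages the survey describes would refine exactly these two seams: better retrieval (re-ranking, query rewriting) and looser coupling between the retriever and the generator.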

Topics

Topic Modeling · Multimodal Machine Learning Applications · Artificial Intelligence in Healthcare and Education