Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.
The Hidden Threat in Plain Text: Attacking RAG Data Loaders
1
Zitationen
5
Autoren
2025
Jahr
Abstract
Large Language Models (LLMs) have transformed human–machine interaction since ChatGPT’s 2022 debut, with Retrieval-Augmented Generation (RAG) emerging as a key framework that enhances LLM outputs by integrating external knowledge. However, RAG’s reliance on ingesting external documents introduces new vulnerabilities. This paper exposes a critical security gap at the data loading stage, where malicious actors can stealthily corrupt RAG pipelines by exploiting document ingestion.
Ähnliche Arbeiten
Rethinking the Inception Architecture for Computer Vision
2016 · 30.310 Zit.
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24.369 Zit.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21.289 Zit.
CBAM: Convolutional Block Attention Module
2018 · 21.234 Zit.
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18.483 Zit.