This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Development and Evaluation of a Retrieval-Augmented Generation System for Radiology Guidelines
Citations: 0
Authors: 11
Year: 2026
Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in processing and generating domain-specific information. Their application in clinical decision-making is, however, still limited by unreliability and outdated knowledge. In time-sensitive medical environments, such as radiology, rapid access to accurate and up-to-date guidelines is crucial for optimal patient outcomes. The European Society of Urogenital Radiology (ESUR) guidelines provide such diagnostic and therapeutic recommendations. However, manual lookup is often time-consuming and inefficient. To address these challenges, we developed a retrieval-augmented generation (RAG) system that grounds LLM responses in authoritative guideline content. The system extracts, indexes, and retrieves information using a headline-based chunking approach and the all-mpnet-base-v2 embedding model. We evaluated its performance against both a standalone LLM and an enhanced iterative RAG system using 79 queries, assessing retrieval accuracy, factual correctness, completeness, and clinical usefulness. Both RAG systems significantly outperformed the standalone LLM on all metrics, with the enhanced model achieving the highest scores: factual accuracy (0.89 vs. 0.68), completeness (4.20 vs. 3.05 on a 5-point Likert scale), and usefulness (3.99 vs. 3.09 on a 5-point Likert scale). The enhanced RAG pipeline showed minor, statistically non-significant improvements over the standard version in factual accuracy and completeness. While our results are promising, opportunities remain to improve retrieval accuracy and reduce hallucinations. Future refinements, such as domain-specific embeddings and advanced query expansion, may further improve reliability. These findings suggest that grounded RAG systems have significant potential to enhance clinical guideline accessibility but require further validation before clinical deployment.
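The retrieval step described in the abstract can be sketched as headline-based chunking followed by embedding similarity search. This is a minimal illustrative sketch, not the authors' implementation: the paper uses the all-mpnet-base-v2 sentence-transformer model, while here a toy bag-of-words embedding stands in so the example is self-contained; the document text and function names are hypothetical.

```python
import math
import re
from collections import Counter

def chunk_by_headlines(text):
    """Split a guideline document into chunks, one per headline section."""
    chunks, current = [], []
    for line in text.splitlines():
        if re.match(r"^#+\s", line):  # a markdown-style headline starts a new chunk
            if current:
                chunks.append("\n".join(current))
            current = [line]
        else:
            current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

def embed(text):
    """Toy embedding: lowercased word counts (stand-in for all-mpnet-base-v2)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=1):
    """Return the top_k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

# Hypothetical mini-document standing in for ESUR guideline content.
doc = """# Contrast media dosing
Adjust iodinated contrast dose to renal function.

# Gadolinium safety
Screen for renal impairment before gadolinium administration.
"""
chunks = chunk_by_headlines(doc)
best = retrieve("gadolinium renal screening", chunks)[0]
```

In a full RAG pipeline, the retrieved chunk would then be passed to the LLM as grounding context for answer generation.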
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations