This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Performance of Retrieval-Augmented-Generation large language models in guideline-concordant PSA testing: A comparative study against junior clinicians (Preprint)
Citations: 0
Authors: 14
Year: 2025
Abstract
<sec> <title>BACKGROUND</title> Society guidelines for prostate cancer screening via prostate-specific antigen (PSA) testing serve to standardize patient care and are often used by trainees, junior staff, and generalist medical practitioners to guide medical decision-making. Adhering to guidelines is a time-consuming and challenging task, and rates of inappropriate PSA testing remain high. </sec> <sec> <title>OBJECTIVE</title> This study evaluates a retrieval-augmented generation (RAG) enhanced large language model (LLM), grounded in current European Association of Urology (EAU) and American Urological Association (AUA) guidelines, to assess its effectiveness in providing guideline-concordant PSA screening recommendations compared with junior clinicians. </sec> <sec> <title>METHODS</title> A RAG pipeline was developed and used to process a series of 44 fictional case scenarios. Five junior clinicians were asked to provide PSA testing recommendations for the same scenarios in both closed-book and open-book formats. Answers were scored for accuracy in a binary (correct/incorrect) fashion. </sec> <sec> <title>RESULTS</title> The RAG-LLM tool provided guideline-concordant recommendations in 95.5% of case scenarios, compared with junior clinicians, who were correct in 62.3% of scenarios in the closed-book format and 74.1% of scenarios in the open-book format. The difference was statistically significant for both the closed-book (p < 0.001) and open-book (p < 0.001) formats. </sec> <sec> <title>CONCLUSIONS</title> The use of RAG techniques allows LLMs to integrate complex guidelines into day-to-day medical decision-making. RAG-LLM tools in urology have the capability to enhance clinical decision-making by providing guideline-concordant recommendations for PSA testing, potentially improving the consistency of healthcare delivery, reducing cognitive load on clinicians, and reducing unnecessary investigations and costs. </sec>
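The accuracy comparison in RESULTS can be cross-checked with a standard two-proportion z-test. The counts used below are reconstructions from the reported percentages (42/44 for the RAG-LLM; 137/220 and 163/220 for the clinicians, assuming each of the five clinicians answered all 44 scenarios), not figures taken from the paper, and the authors may have used a different statistical test; this is only an illustrative sketch.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal tail
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# ASSUMED counts, back-calculated from the abstract's percentages:
# RAG-LLM 42/44 (95.5%); closed-book clinicians 137/220 (62.3%);
# open-book clinicians 163/220 (74.1%).
z_closed, p_closed = two_proportion_z(42, 44, 137, 220)
z_open, p_open = two_proportion_z(42, 44, 163, 220)

print(f"closed-book: z = {z_closed:.2f}, p = {p_closed:.5f}")
print(f"open-book:   z = {z_open:.2f}, p = {p_open:.5f}")
```

With these assumed counts the closed-book comparison is significant well below the 0.001 threshold; the open-book p-value lands near that threshold, so the paper's exact figure likely depends on the specific test (e.g. Fisher's exact or a one-sided formulation) the authors applied.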
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations