This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Exploring LLM-Based Generative Recommender Systems: Corpora, Customization, and Evaluation Insights
Citations: 0
Authors: 7
Year: 2025
Abstract
Large Language Model-Driven Generative Recommender Systems (LLM-GRSs) are increasingly transforming healthcare, particularly in question-answering systems. This study systematically reviewed their corpus sources, customization techniques, and evaluation metrics. A search of PubMed/MEDLINE, Embase, Scopus, and Web of Science identified 61 studies (2021–2024) using LLM-GRSs for medical information delivery. Corpus sources were categorized into real-world clinical resources (n = 24), literature materials (n = 34), open-source datasets (n = 33), and web-crawled data (n = 11), with 44 studies integrating multiple sources. Key model customization strategies included pre-training, prompt engineering, retrieval-augmented generation (RAG), fine-tuning, in-context learning, and offline learning. Fourteen studies used a single customization technique, while 41 studies combined these methods during model development. The evaluation metrics were classified into three main domains: 1) process metrics, 2) usability metrics, and 3) outcome metrics. The outcome metrics were further divided into two categories: model-based outcomes and expert-assessed outcomes. The study identified critical gaps in corpus fairness, which contribute to biases arising from geographic, cultural, and socio-economic factors. The reliance on unverified or unstructured data highlights the need for better integration of evidence-based clinical guidelines. Future research should focus on developing a tiered corpus architecture with vetted sources and dynamic weighting, while ensuring model transparency. Additionally, the lack of standardized evaluation frameworks for domain-specific models calls for comprehensive validation of LLM-GRSs in real-world healthcare settings.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations