This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Large Language Models for Cardiovascular Disease, Cancer, and Mental Disorders: A Review of Systematic Reviews
Citations: 0
Authors: 9
Year: 2025
Abstract
<b>Background/Objective:</b> The use of Large Language Models (LLMs) has recently attracted significant interest from the research community for the development and adoption of Generative Artificial Intelligence (GenAI) solutions in healthcare. The present work introduces the first meta-review (i.e., review of systematic reviews) in the field of LLMs for chronic diseases, focusing on cardiovascular disease, cancer, and mental disorders, to identify their value in patient care and the challenges to their implementation and clinical application. <b>Methods:</b> A literature search of the bibliographic databases PubMed and Scopus was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to identify systematic reviews incorporating LLMs. The original studies included in the reviews were synthesized according to their target disease, specific application, LLMs used, data sources, accuracy, and key outcomes. <b>Results:</b> The literature search identified 5 systematic reviews meeting our inclusion and exclusion criteria, which together examined 81 unique LLM-based solutions. The largest share of solutions targeted mental disorders (86%), followed by cancer (7%) and cardiovascular disease (6%), indicating a strong research focus on mental health. Generative Pre-trained Transformer (GPT)-family models were used most frequently (~55%), followed by Bidirectional Encoder Representations from Transformers (BERT) variants (~40%). Key application areas included depression detection and classification (38%), suicidal ideation detection (7%), question answering based on treatment guidelines and recommendations (7%), and emotion classification (5%). Study aims and designs were highly heterogeneous, and methodological quality was generally moderate, with frequent risk-of-bias concerns.
Reported performance varied widely across domains and datasets, and many evaluations relied on fictional vignettes or non-representative data, limiting generalizability. The most significant challenges identified in the development and evaluation of LLMs include inconsistent accuracy, bias detection and mitigation, model transparency, data privacy, the need for continual human oversight, ethical concerns and guidelines, and the design and conduct of high-quality studies. <b>Conclusions:</b> While LLMs show promise for screening, triage, decision support, and patient education, particularly in mental health, the current literature is largely descriptive and constrained by gaps in data, transparency, and safety. We recommend prioritizing rigorous real-world evaluations, diverse benchmark datasets, bias auditing, and governance frameworks before large-scale clinical deployment and adoption of LLMs.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations