This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A systematic review of the limitations of large language models in generating healthcare content
0
Citations
5
Authors
2026
Year
Abstract
Large language models (LLMs) have recently gained prominence in healthcare content provision due to their numerous advantages. Despite these benefits, LLMs exhibit notable limitations in this domain. This study aimed to systematically identify the limitations of LLMs in the provision of healthcare content. This study was a systematic review conducted in September 2025, including articles published in English between 2018 and 2025. Searches were performed in PubMed, Scopus, and the Cochrane Database of Systematic Reviews. Two independent evaluators screened the references and assessed the quality of the selected studies using the Authority, Accuracy, Coverage, Objectivity, Date, and Significance (AACODS) checklist. Data were analyzed using Boyatzis's qualitative thematic approach with an inductive methodology, applying the input-process-output (IPO) model as the analytical framework. A total of 81 studies were included in the final analysis. The included studies were predominantly of high quality and demonstrated minimal risk of bias. The thematic analysis identified key themes: data limitations, dependence on input and prompt quality, accessibility issues, model design and architecture constraints, interaction challenges, response quality and comprehensiveness, and ethical, safety, and regulatory concerns. The study identified multiple limitations of LLMs in healthcare, with output issues being the most common. Among these, the most frequently cited limitation was the accuracy gap. However, these output issues mainly resulted from flaws in input data, emphasizing the crucial role of input quality. The study also proposed strategies to address these challenges.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,593 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,483 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,003 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,824 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations