This is an overview page with metadata about this scientific work. The full article is available from the publisher.
Leveraging Large Language Models for High-Quality Lay Summaries: Efficacy of ChatGPT-4 with Custom Prompts in a Consecutive Series of Prostate Cancer Manuscripts
Citations: 6 · Authors: 17 · Year: 2025
Abstract
Clear and accessible lay summaries are essential for enhancing the public understanding of scientific knowledge. This study aimed to evaluate whether ChatGPT-4 can generate high-quality lay summaries that are both accurate and comprehensible for prostate cancer research in <i>Current Oncology</i>. To achieve this, it systematically assessed ChatGPT-4's ability to summarize 80 prostate cancer articles published in the journal between July 2022 and June 2024 using two distinct prompt designs: a basic "simple" prompt and an enhanced "extended" prompt. Readability was assessed using established metrics, including the Flesch-Kincaid Reading Ease (FKRE), while content quality was evaluated with a 5-point Likert scale for alignment with source material. The extended prompt demonstrated significantly higher readability (median FKRE: 40.9 vs. 29.1, <i>p</i> < 0.001), better alignment with quality thresholds (86.2% vs. 47.5%, <i>p</i> < 0.001), and reduced the required reading level, making content more accessible. Both prompt designs produced content with high comprehensiveness (median Likert score: 5). This study highlights the critical role of tailored prompt engineering in optimizing large language models (LLMs) for medical communication. Limitations include the exclusive focus on prostate cancer, the use of predefined prompts without iterative refinement, and the absence of a direct comparison with human-crafted summaries. These findings underscore the transformative potential of LLMs like ChatGPT-4 to streamline the creation of lay summaries, reduce researchers' workload, and enhance public engagement. Future research should explore prompt variability, incorporate patient feedback, and extend applications across broader medical domains.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations