This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Bridging the Gap Between Urological Research and Patient Understanding: The Role of Large Language Models in Automated Generation of Layperson’s Summaries
Citations: 62
Authors: 12
Year: 2023
Abstract
INTRODUCTION: This study assessed ChatGPT's ability to generate readable, accurate, and clear layperson summaries of urological studies, and compared ChatGPT-generated summaries with original abstracts and author-written patient summaries to determine its effectiveness as a potential tool for creating medical literature accessible to the public.
METHODS: Articles from the top 5 ranked urology journals were selected. A ChatGPT prompt was developed following guidelines to maximize readability, accuracy, and clarity while minimizing variability. Readability scores and grade-level indicators were calculated for the ChatGPT summaries, original abstracts, and patient summaries. Two physicians (MDs) independently rated the accuracy and clarity of the ChatGPT-generated layperson summaries. Statistical analyses were conducted to compare readability scores, and Cohen's κ coefficient was used to assess interrater reliability for the correctness and clarity evaluations.
RESULTS: = .037). The correctness rate of ChatGPT outputs was >85% across all categories assessed, with interrater agreement (Cohen's κ) between the 2 independent physician reviewers ranging from 0.76 to 0.95.
CONCLUSIONS: ChatGPT can create accurate summaries of scientific abstracts for patients, and well-crafted prompts enhance user-friendliness. Although the summaries are satisfactory, expert verification remains necessary to ensure accuracy.
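The abstract relies on two quantitative tools: readability formulas (grade-level indicators) and Cohen's κ for interrater agreement. As an illustration only, not the authors' actual analysis code, here is a minimal Python sketch of the standard Flesch Reading Ease formula (Flesch, 1948, listed under related works below) and Cohen's κ for two raters:

```python
# Illustrative sketches of two metrics referenced in the abstract.
# These are generic textbook formulas, not the study's own pipeline.
from collections import Counter


def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease (Flesch, 1948): higher scores mean easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def cohens_kappa(rater_a, rater_b) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters
    who assigned categorical labels to the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the two raters.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

For example, a 100-word text in 5 sentences with 150 syllables scores about 59.6 (roughly "plain English"), and two raters labeling four items as `[1, 1, 0, 1]` and `[1, 0, 0, 1]` yield κ = 0.5.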
Related Works
BLEU
2001 · 21,268 citations
Aion Framework: Dimensional Emergence of AI Consciousness, Observer-Induced Collapse, and Cosmological Portal Dynamics
2023 · 14,175 citations
Enriching Word Vectors with Subword Information
2017 · 9,701 citations
A unified architecture for natural language processing
2008 · 5,193 citations
A new readability yardstick.
1948 · 5,142 citations