This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Evaluation of artificial intelligence-generated layperson's summaries from abstracts of vascular surgical scientific papers
6
Citations
9
Authors
2024
Year
Abstract
Background: The study aimed to assess the efficacy of ChatGPT 3.5, an artificial intelligence (AI) language model, in generating readable and accurate layperson's summaries from abstracts of vascular surgery studies.

Materials and methods: Abstracts from four leading vascular surgery journals published between October 2023 and December 2023 were utilized. A ChatGPT prompt for developing layperson's summaries was designed based on established methodology. Readability measures and grade-level assessments (RR-GLIs) were compared between original abstracts and ChatGPT-generated summaries. Two vascular surgeons evaluated a randomized sample of ChatGPT summaries for clarity and correctness. Readability scores of original abstracts were compared with ChatGPT-generated layperson's summaries using a t-test. Moreover, a sub-analysis based on abstract topics was performed. Cohen's kappa assessed interrater reliability for accuracy and clarity.

Results: One hundred and fifty papers were included in the database. Statistically significant differences were observed in RR-GLIs between original abstracts and AI-generated summaries, indicating improved readability in the latter (mean Global Readability Score of 36.6±13.8 in the original abstract and of 50.5±11.1 in the AI-generated summary, p<0.001). This trend persisted across abstract topics and journals. While one physician found all summaries correct, the other noted inaccuracies in 32% of cases, with mean rating scores of 4 and 4.7, respectively, and no inter-observer agreement (k value = -0.1).

Conclusions: ChatGPT demonstrates utility in producing patient-friendly summaries from scientific abstracts in vascular surgery, although the accuracy and quality of AI-generated summaries warrant further scrutiny.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations