This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The Utility of Chat GPT in Venous Education
Citations: 1
Authors: 10
Year: 2024
Abstract
Chat GPT is an artificial intelligence-powered language model that is increasingly used in the medical setting. Although quick access to large amounts of information is promising for vascular surgical education, the quality and depth of the information provided by the current Chat GPT model are not well understood. We aimed to study the utility of Chat GPT in teaching medical students and vascular surgery residents about varicose veins. We hypothesized that Chat GPT can provide a basic overview suitable for medical students' and possibly residents' education. We generated two learning documents using Chat GPT, one for medical students and one for residents. We asked Chat GPT 3.5 to produce a document for "varicose veins explained to a medical student" and another for "varicose veins explained to a vascular surgery resident." We asked it to "include background, anatomy, pathophysiology, risk factors, clinical presentation, complications, diagnostic evaluation, and management." The texts generated for students and residents were compared and reviewed by seven academic vascular surgeons practicing in a teaching hospital. Five-point Likert scales were used to rate the accuracy, completeness, complexity, and applicability of each text (Table I). Average values for each survey question were compared using Mann-Whitney U tests. Aside from the increased use of advanced medical terminology in the residents' text, the content of the two texts was similar. Overall, scores were slightly higher for the text generated for residents (average, 3.91) than for students (average, 3.71) (Table II). All surgeons believed the information was accurate (average, 4.5), although more so for residents (average, 4.71) than for students (average, 4.29). Most surgeons believed the information was not advanced enough (average, 3.21), albeit slightly more advanced for residents (average, 3.29 vs 3.14 for students).
Most surgeons were on the fence about whether they would use the text to teach medical students (average, 3.43) or residents (average, 3.57). Although Chat GPT offers promising prospects in venous education, the current model does not meet the standards required for medical education. Although the information was accurate and concise, it was not advanced enough, and most surgeons were not enthusiastic about using it to teach students or residents. Optimizing Chat GPT-generated content and expanding its applicability to specialized education remain subjects for future development and research.

Table I. Survey used for vascular surgeons' assessment of Chat GPT-generated texts

Each item was rated on a five-point Likert scale: 1 = strongly disagree, 2 = disagree, 3 = neither disagree nor agree, 4 = agree, 5 = strongly agree.

Medical students' text:
- Information is accurate
- Information is complete
- Information is concise
- Information is advanced enough
- I would use this summary to teach medical students about varicose veins

Residents' text:
- Information is accurate
- Information is complete
- Information is concise
- Information is advanced enough
- I would use this summary to teach vascular surgery residents about varicose veins

Table II. Survey results for vascular surgeons' ratings of medical students' texts, residents' texts, and overall scores

                    Accurate      Complete      Concise       Advanced enough  Would use to teach  Overall score
                    (P = .209)    (P = .259)    (P = .710)    (P = .902)       (P = .902)
Medical students    4.29 ± 0.49   3.57 ± 0.54   4.14 ± 0.38   3.14 ± 1.35      3.43 ± 1.13         3.71 ± 0.60
Residents           4.71 ± 0.49   4.00 ± 0.58   4.00 ± 0      3.29 ± 1.25      3.57 ± 0.79         3.91 ± 0.49
Overall             4.50          3.79          4.07          3.21             3.50                 3.81

Scores are reported as mean ± standard deviation.
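The abstract states that average ratings for the two texts were compared using Mann-Whitney U tests. As a minimal pure-Python sketch, the rank-sum computation behind that test can be illustrated with hypothetical per-surgeon "Information is accurate" ratings, chosen here only so their means match the reported 4.29 ± 0.49 and 4.71 ± 0.49; the actual individual ratings are not given in the source.

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic for sample_a, using average ranks for ties."""
    pooled = sorted((value, idx) for idx, value in enumerate(sample_a + sample_b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        # Find the run of tied values and assign each the average rank.
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    n_a = len(sample_a)
    rank_sum_a = sum(ranks[:n_a])
    # U for sample_a: its rank sum minus the minimum possible rank sum.
    return rank_sum_a - n_a * (n_a + 1) / 2

# Hypothetical ratings from seven surgeons (NOT the study's actual data),
# constructed to reproduce the reported means of 4.29 and 4.71.
students_text = [4, 4, 4, 4, 4, 5, 5]
residents_text = [5, 5, 5, 5, 5, 4, 4]
print(mann_whitney_u(students_text, residents_text))  # -> 14.0
```

In practice a library routine such as scipy.stats.mannwhitneyu would be used, since it also returns the tie-corrected P value reported in Table II.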
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations