This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Assessing ChatGPT Ability to Answer Common Patient Questions on Distal Biceps Ruptures
0 Citations
7 Authors
2025 Year
Abstract
Background: ChatGPT (OpenAI) is artificial intelligence (AI) driven software that responds conversationally to written prompts. Given the rapid growth of ChatGPT and its accessibility, it is likely that patients may use this technology for medical education. The aim of this study was to determine whether ChatGPT could appropriately respond to frequently asked questions (FAQ) pertaining to distal biceps rupture.

Methods: Ten questions were gathered from the 'Frequently Asked Questions' sections of ten well-known health care institutions. The questions were then entered into ChatGPT 3.5 with no follow-up prompts. The responses were analyzed for accuracy, completeness, and readability using the DISCERN instrument, the Journal of the American Medical Association (JAMA) Benchmark Criteria, and Flesch-Kincaid Grade Level scores.

Results: The average Flesch-Kincaid Grade Level of the ten responses was 14.9 (range: 11.87-17.44). Using the DISCERN score, one response was rated as "Very Poor," seven responses were rated as "Poor," and two responses were rated as "Fair." The JAMA Benchmark score was zero for all responses.

Conclusion: The chatbot was able to answer commonly asked patient questions pertaining to distal biceps rupture sufficiently, delivering responses in a coherent manner. However, the high reading level may not be suitable for most patients, and the lack of documentation of the sources of information prevents readers from checking the factual content of the responses. For a comprehensive discussion of the treatment options and their implications, it remains essential for patients to have a consultation with a board-certified orthopedic surgeon with the appropriate training to address this injury.
Related Works
The Strengths and Difficulties Questionnaire: A Research Note
1997 · 14,533 citations
Making sense of Cronbach's alpha
2011 · 13,675 citations
QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies
2011 · 13,538 citations
A method for estimating the probability of adverse drug reactions
1981 · 11,450 citations
Evidence-Based Medicine
1992 · 4,133 citations