This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Responses From ChatGPT‐4 Show Limited Correlation With Expert Consensus Statement on Anterior Shoulder Instability
Citations: 13
Authors: 7
Year: 2024
Abstract
Purpose: To compare the similarity of answers provided by Generative Pretrained Transformer‐4 (GPT‐4) with those of a consensus statement on diagnosis, nonoperative management, and Bankart repair in anterior shoulder instability (ASI).

Methods: An expert consensus statement on ASI published by Hurley et al. in 2022 was reviewed, and the questions posed to the expert panel were extracted. GPT‐4, the subscription version of ChatGPT, was queried with the same set of questions. Answers provided by GPT‐4 were compared with those of the expert panel and subjectively rated for similarity by 2 experienced shoulder surgeons. GPT‐4 was then used to rate the similarity of its own responses to the consensus statement, classifying them as low, medium, or high. Rates of similarity as classified by the shoulder surgeons and by GPT‐4 were then compared, and interobserver reliability was calculated using weighted κ scores.

Results: The degree of similarity between the responses of GPT‐4 and the ASI consensus statement, as rated by the shoulder surgeons, was high for 25.8%, medium for 45.2%, and low for 29% of questions. GPT‐4 assessed its own similarity as high for 48.3%, medium for 41.9%, and low for 9.7% of questions. The surgeons and GPT‐4 agreed on the classification of 18 questions (58.1%) and disagreed on 13 questions (41.9%).

Conclusions: The responses generated by artificial intelligence exhibit limited correlation with an expert statement on the diagnosis and treatment of ASI.

Clinical Relevance: As the use of artificial intelligence becomes more prevalent, it is important to understand how closely its information resembles content produced by human authors.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations