This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Analyzing the Performance of ChatGPT About Osteoporosis
24 citations · 1 author · 2023
Abstract
INTRODUCTION: This study evaluates ChatGPT's knowledge of osteoporosis. METHODS: Osteoporosis-related frequently asked questions (FAQs) were compiled by examining websites frequently visited by patients, the official websites of hospitals, and social media. Questions based on these scientific data were prepared in accordance with National Osteoporosis Guideline Group guidelines. A rater scored each ChatGPT answer from 1 to 4 (1: the information was completely correct; 2: the information was correct but insufficient; 3: some of the information was correct, but the answer contained incorrect information; 4: the answer consisted of completely incorrect information). The reproducibility of ChatGPT's responses on osteoporosis was assessed by asking each question twice; an answer was considered reproducible if it received the same score both times. RESULTS: ChatGPT responded to 72 FAQs with an accuracy rate of 80.6%. Accuracy was highest in the prevention category (91.7%), followed by the general knowledge category (85.8%). Only 19 of the 31 (61.3%) questions prepared according to the National Osteoporosis Guideline Group guidelines were answered correctly by ChatGPT, and two answers (6.4%) were categorized as grade 4. The reproducibility rate of ChatGPT's answers was 86.1% for the 72 FAQs and 83.9% for the questions based on the National Osteoporosis Guideline Group guidelines. CONCLUSION: The present study showed for the first time that ChatGPT provided adequate answers to more than 80% of FAQs about osteoporosis. However, the accuracy of ChatGPT's answers to inquiries based on the National Osteoporosis Guideline Group guidelines dropped to 61.3%.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,557 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,447 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,944 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations