This is an overview page with metadata for this scientific article. The full article is available from the publisher.
A Comparative Study of Generative Artificial Intelligence Tools for Human Bone Learning
Citations: 0 · Authors: 7 · Year: 2026
Abstract
The aim of this study was to evaluate the effectiveness of three different generative AI tools, ChatGPT-4o, Claude 3.7 Sonnet, and Gemini 2.0 Flash, with a specific focus on their accuracy and response consistency in supporting self-directed learning in human skeletal anatomy. A total of 143 human skeletal specimens were selected for evaluation. Bone specimens from different donors were photographed to represent each structure, resulting in a total of 715 images. Four types of questions were generated to assess each AI model's ability to identify anatomical features. Responses were categorized into four groups: correct, incorrect, could not be specified, and not analyzable. For consistency assessment, 105 photographs were randomly selected from the total image set, and each was submitted to the three models independently on five separate occasions. The number of identical responses out of five was recorded for each model. ChatGPT-4o achieved the highest overall accuracy at 44.75%, significantly higher than the other two generative AI tools. Based on the coefficient scores calculated using Cohen's κ, the majority of outcomes demonstrated levels of agreement ranging from slight to fair across the three pairs of tools compared. Gemini 2.0 Flash was the only model that produced responses classified as not analyzable. It also achieved the highest proportion of identical responses across five trials, at 62.86%. Claude 3.7 Sonnet showed the highest proportion of inconsistent responses. These findings suggest that the generative AI models evaluated lack the reliability required for anatomy education and should be used with caution due to their high propensity to generate inaccurate information.
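The abstract reports inter-tool agreement using Cohen's κ, which compares observed agreement between two raters against the agreement expected by chance, κ = (p_o − p_e)/(1 − p_e). A minimal sketch of this computation (the function name and example labels are illustrative, not taken from the study):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each rater's marginal frequency per category.
    p_e = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n)
        for c in categories
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two tools classifying four specimens.
kappa = cohens_kappa(["correct", "correct", "incorrect", "incorrect"],
                     ["correct", "incorrect", "incorrect", "incorrect"])
```

On the conventional Landis–Koch scale referenced implicitly by the abstract, κ in (0, 0.20] is "slight" and (0.20, 0.40] is "fair" agreement.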
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,400 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,261 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,695 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,506 citations