This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Can artificial intelligence pass the written European Board of Hand Surgery exam?
1 citation · 9 authors · 2025
Abstract
Various artificial intelligence-based applications have emerged as transformative tools across numerous domains. Among these, ChatGPT has earned global recognition for its capacity for dynamic user interaction and holds significant potential in the medical sector. However, the subject-specific accuracy of ChatGPT remains a matter of debate. This study assesses the capabilities and knowledge of different artificial intelligence chatbots (ChatGPT, Google Gemini, and Claude) in the domain of hand surgery. Each chatbot completed a full written European Board of Hand Surgery (EBHS) exam. The test results were analyzed according to the EBHS guidelines, focusing on the total scores and the ratio of correct to incorrect responses for each artificial intelligence model. Findings revealed that three out of the four chatbots achieved passing scores on the exam. Notably, ChatGPT-4o1 demonstrated significantly superior performance. This study highlights the subject-specific expertise of different artificial intelligence programs within the specialized field of hand surgery, while also underscoring their variability and limitations.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations