This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Assessment of the Efficacy of the Google Gemini 2.5 Pro Model in Solving the Polish State Specialization Exam in Pediatric Surgery
Citations: 0
Authors: 15
Year: 2025
Abstract
Background: AI language models such as Google Gemini, OpenAI ChatGPT, and Anthropic's Claude are developing rapidly in response to growing demand from many sectors of daily life, science, and industry. By collecting and processing extensive datasets, including medical data, they are becoming increasingly popular tools that support not only IT specialists and programmers but also students and resident physicians in their studies and in preparing for examinations, including specialization exams. Consequently, the reliability and accuracy of the information these models provide are often questioned. This concern motivated the present study, which assessed the utility of the Google Gemini 2.5 Pro model on the Polish State Specialization Examination (PES) in Pediatric Surgery.

Objective: The objective of this study was to assess the effectiveness and confidence levels of the Gemini 2.5 Pro model in answering PES questions, thereby evaluating its potential educational utility in the specialized surgical field of pediatric surgery.

Methods: The study used the most recent official PES in pediatric surgery, from the spring 2025 session. The exam consisted of 120 multiple-choice questions (five options each, one correct answer). Following previously published studies and the nature of PES questions across medical disciplines in Poland, the questions were divided into two categories: clinical and general (theoretical). Before the test, the Gemini 2.5 Pro model was presented with the PES regulations and then given the examination paper containing the questions in Polish. The solved test was scored against the official answer key from the Center for Medical Examinations (CEM) in Łódź. Additionally, the model was instructed to rate its confidence in each answer on a five-point scale (1 = no confidence to 5 = full confidence). The data were analyzed statistically using the chi-squared test and the Mann-Whitney U test.

Results: The Google Gemini 2.5 Pro model answered 103 of 120 questions correctly, an overall accuracy of 85.83%, well above the 60% passing threshold. In the subgroup analysis, the model scored 83% on clinical questions and 91% on general questions. This difference was not statistically significant (p = 0.417), and the effect size (Cohen's h = 0.19) was small. Furthermore, the model's confidence ratings showed that correct answers were generally given with higher confidence, while incorrect answers were associated with lower confidence, suggesting a positive correlation between confidence and accuracy, particularly for general questions. However, due to limited data, the exact effect size of this relationship could not be determined.

Conclusions: Gemini 2.5 Pro's strong performance on the PES demonstrates the considerable potential of advanced AI models to support medical education, even in highly specialized fields such as pediatric surgery. The observed association between correctness and declared confidence may help users gauge the reliability of AI-generated responses. Nevertheless, strong performance in an examination setting does not remove the need for verification and critical evaluation of AI-generated answers in real-world clinical and educational applications.
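The abstract reports a chi-squared comparison of clinical versus general accuracy, a Cohen's h effect size, and a Mann-Whitney U analysis of the five-point confidence ratings. The Python sketch below illustrates that pipeline under stated assumptions: the clinical/general subgroup counts and the per-answer confidence ratings are hypothetical placeholders (this page gives only the percentages and the 120-question total), so the printed values will not exactly reproduce the paper's p = 0.417 and h = 0.19.

```python
# Sketch of the abstract's statistical analysis on HYPOTHETICAL data.
from math import asin, sqrt
from scipy.stats import chi2_contingency, mannwhitneyu

# Assumed subgroup results, chosen only to be consistent with
# roughly 83% (clinical) vs 91% (general) correct; the real split
# is not given on this page.
clinical_correct, clinical_total = 70, 84
general_correct, general_total = 33, 36

# Chi-squared test on the 2x2 contingency table (correct vs incorrect
# per category).
table = [
    [clinical_correct, clinical_total - clinical_correct],
    [general_correct, general_total - general_correct],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h for two proportions: 2*arcsin(sqrt(p1)) - 2*arcsin(sqrt(p2))."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

h = cohens_h(general_correct / general_total,
             clinical_correct / clinical_total)
print(f"Cohen's h = {h:.2f}")  # |h| around 0.2 is conventionally a small effect

# Mann-Whitney U test: are confidence ratings (1-5) higher for correct
# answers than for incorrect ones? Ratings below are invented for
# illustration only.
conf_correct = [5, 5, 4, 5, 3, 4, 5, 5, 4, 5]
conf_incorrect = [3, 2, 4, 2, 3]
u, p_u = mannwhitneyu(conf_correct, conf_incorrect, alternative="greater")
print(f"U = {u}, p = {p_u:.3f}")
```

A one-sided alternative is used in the Mann-Whitney call because the abstract's hypothesis is directional (correct answers carry higher confidence); a two-sided test would also be defensible.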
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations