This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Performance of o1 pro and GPT-4 in self-assessment questions for nephrology board renewal
Citations: 1
Authors: 5
Year: 2025
Abstract
Background: Large language models (LLMs) are increasingly evaluated in medical education and clinical decision support, but their performance in highly specialized fields, such as nephrology, is not well established. We compared two advanced LLMs, GPT-4 and the newly released o1 pro, on comprehensive nephrology board renewal examinations.

Methods: We administered 209 Japanese Self-Assessment Questions for Nephrology Board Renewal from 2014–2023 to o1 pro and GPT-4 using ChatGPT pro. Each question, including images, was presented in a separate chat session to prevent contextual carryover. Questions were classified by taxonomy (recall/interpretation/problem-solving), question type (general/clinical), image inclusion, and nephrology subspecialty. We calculated the proportion of correct answers and compared performances using chi-square or Fisher's exact tests.

Results: Overall, o1 pro scored 81.3% (170/209), significantly higher than GPT-4's 51.2% (107/209; p<0.001). o1 pro exceeded the 60% passing criterion in every year, while GPT-4 achieved this in only two of the ten years. Across taxonomy levels, question types, and the presence or absence of images, o1 pro consistently outperformed GPT-4 (p<0.05 for multiple comparisons). Performance differences were also significant in several nephrology subspecialties, such as chronic kidney disease, confirming o1 pro's broad superiority.

Conclusion: o1 pro substantially outperformed GPT-4 on a comprehensive nephrology board renewal examination, demonstrating advanced reasoning and integration of specialized knowledge. These findings highlight the potential of next-generation LLMs as valuable tools in specialty medical education, and possibly clinical support in nephrology, warranting further careful validation.
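The headline comparison (170/209 vs. 107/209, p<0.001) can be checked from the reported counts alone. A minimal sketch using SciPy's chi-square test of independence, one of the two test families the Methods name; the paper does not state which statistical software was actually used, so `scipy.stats.chi2_contingency` here is an illustrative assumption:

```python
# Sketch: re-deriving the reported overall comparison from the abstract's counts.
# Assumption: a chi-square test of independence on a 2x2 table, as one of the
# test families named in the Methods (chi-square or Fisher's exact test).
from scipy.stats import chi2_contingency

n = 209                 # questions per model
o1_correct = 170        # reported: 81.3%
gpt4_correct = 107      # reported: 51.2%

# 2x2 contingency table: rows = model, columns = (correct, incorrect)
table = [
    [o1_correct, n - o1_correct],
    [gpt4_correct, n - gpt4_correct],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"o1 pro: {o1_correct / n:.1%}, GPT-4: {gpt4_correct / n:.1%}")
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p is far below 0.001
```

The computed proportions round to the abstract's 81.3% and 51.2%, and the p-value is consistent with the reported p<0.001.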
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations