This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Comparative accuracy of ChatGPT-o1, DeepSeek R1, and Gemini 2.0 in answering general primary care questions
Citations: 0
Authors: 5
Year: 2025
Abstract
Objectives: To evaluate and compare the accuracy and reliability of the large language models (LLMs) ChatGPT-o1, DeepSeek R1, and Gemini 2.0 in answering general primary care medical questions, assessing their reasoning approaches and potential applications in medical education and clinical decision-making.

Design: A cross-sectional study using an automated evaluation process in which three LLMs answered a standardized set of multiple-choice medical questions.

Setting: The models were tested between February 1 and February 15, 2025. For each model, every question was posed in a new chat session. Questions were presented in Italian, with no additional instructions. Responses were compared against the official test solutions.

Participants: Three LLMs were evaluated: ChatGPT-o1 (OpenAI), DeepSeek R1 (DeepSeek), and Gemini 2.0 Flash Thinking Experimental (Google). No human subjects or patient data were used.

Intervention: Each model received the same 100 multiple-choice questions and provided a single response per question, with no follow-up interactions. Scoring awarded +1 for a correct answer and 0 for an incorrect one.

Main Outcome Measures: Accuracy was measured as the percentage of correct responses. Inter-model agreement was assessed with Cohen's kappa, and statistical significance was evaluated with McNemar's test.

Results: ChatGPT-o1 achieved the highest accuracy (98%), followed by Gemini 2.0 (96%) and DeepSeek R1 (95%). Statistical analysis found no significant differences among the three models (p > 0.05). Cohen's kappa indicated low agreement (ChatGPT-o1 vs. DeepSeek R1 = 0.2647; ChatGPT-o1 vs. Gemini 2.0 = 0.315), suggesting variation in reasoning.

Conclusion: The LLMs exhibited high accuracy in answering primary care medical questions, highlighting their potential for medical education and clinical decision support in primary care. However, inconsistencies between models suggest that a multi-model or AI-assisted approach is preferable to reliance on a single AI system. Future research should explore performance on real clinical cases and across different medical specialties.
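The outcome measures described above (per-question accuracy, Cohen's kappa for inter-model agreement, and McNemar's test for paired significance) can be sketched in a few lines of Python. This is a minimal illustration assuming each model's responses are scored as a binary vector (1 = correct, 0 = incorrect); the function names and example data are hypothetical, not taken from the study.

```python
# Minimal sketch of the evaluation statistics, assuming binary
# per-question scores per model (1 = correct, 0 = incorrect).
from math import comb

def accuracy(scores):
    """Fraction of questions answered correctly."""
    return sum(scores) / len(scores)

def cohens_kappa(a, b):
    """Cohen's kappa between two binary correct/incorrect vectors."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                # marginal "correct" rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)           # chance agreement
    return (po - pe) / (1 - pe)

def mcnemar_exact(a, b):
    """Two-sided exact McNemar p-value on the discordant pairs."""
    n01 = sum(x == 0 and y == 1 for x, y in zip(a, b))
    n10 = sum(x == 1 and y == 0 for x, y in zip(a, b))
    n, k = n01 + n10, min(n01, n10)
    if n == 0:
        return 1.0                                    # no discordant pairs
    # Binomial(n, 0.5) tail probability, doubled and capped at 1.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)
```

With 100-question vectors for each model pair, these functions reproduce the kind of figures reported in the abstract: high individual accuracies can coexist with low kappa, because kappa discounts the agreement expected by chance when both models are correct most of the time.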
Related works

Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations

High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations

Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations