This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Evaluating the Accuracy and Explanatory Quality of Large Language Models ChatGPT, Claude, DeepSeek, Gemini, Grok, and Le Chat in Statistical Test Selection for Hypothesis Testing Decisions
Citations: 1
Authors: 6
Year: 2025
Abstract
Background
Large language models (LLMs) are increasingly integrated into academic and professional research workflows, yet their capability to accurately select appropriate statistical tests for hypothesis testing remains underexplored. Incorrect statistical test selection can lead to invalid conclusions and compromise scientific validity, making this evaluation critical for determining the reliability of LLMs in research applications. The study objective was to evaluate and compare the accuracy of six prominent LLMs (ChatGPT, Claude, DeepSeek, Gemini, Grok, and Le Chat) in selecting appropriate statistical tests for various hypothesis testing scenarios.

Materials and methods
A comparative, cross-sectional evaluation was conducted using 20 standardized statistical testing scenarios. The scenarios were designed to cover 20 different hypothesis testing situations, including comparisons of means and proportions, non-parametric alternatives, paired versus independent samples, and correlation and regression analyses. All models were prompted with identical instructions and evaluated by five independent experts with profound knowledge of biostatistics. Responses were assessed for accuracy and rated on five domains (clarity and accessibility, identification of necessary assumptions, pedagogical value, problem-solving approach, and statistical reasoning) using a five-point Likert scale. Analysis of variance (ANOVA) was applied for between-group comparisons, and p<0.05 was considered significant.

Results
All six LLMs achieved 100% accuracy in statistical test selection across all 20 hypothesis scenarios. However, significant variation emerged in explanatory quality. Claude demonstrated superior performance in clarity and accessibility (4.65 ± 0.58, p=0.05), while the problem-solving approach showed the most consistent excellence across models. Statistical reasoning ratings ranged from 3.16 to 4.66, with complex regression methods receiving lower ratings than basic statistical tests. Gemini excelled in pedagogical value (4.50 ± 0.68), while ChatGPT ranked lowest in statistical reasoning despite strong problem-solving capabilities.

Conclusions
All LLMs demonstrated perfect accuracy in statistical test selection; however, differences exist in the quality of the explanations and reasoning provided. These findings suggest that current-generation LLMs have become dependable tools for statistical consultation in hypothesis testing scenarios. However, users should consider model-specific strengths when seeking detailed explanations or educational content.
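The between-group comparison described in the abstract (one-way ANOVA on five-point Likert ratings, significance at p<0.05) can be sketched as follows. This is a minimal illustration only: the rating values and model subset below are invented for demonstration and are not the study's data.

```python
# Hypothetical sketch of the study's between-group comparison: a one-way
# ANOVA over expert Likert ratings (1-5) of explanatory quality, one
# group of ratings per model. All numbers here are invented examples.
from scipy.stats import f_oneway

# Invented expert ratings for three of the six evaluated models
ratings = {
    "Claude":  [5, 5, 4, 5, 4],
    "Gemini":  [4, 5, 4, 4, 5],
    "ChatGPT": [3, 4, 3, 4, 3],
}

# f_oneway takes one sample per group and returns (F statistic, p-value)
f_stat, p_value = f_oneway(*ratings.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# As in the study, p < 0.05 would be read as a significant
# between-model difference in rated explanatory quality.
alpha = 0.05
print("significant" if p_value < alpha else "not significant")
```

Note that for ordinal Likert data a non-parametric alternative such as the Kruskal-Wallis test (`scipy.stats.kruskal`) is sometimes preferred; ANOVA is shown here because it is the method named in the abstract.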
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations