This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Evaluating the performance of generative artificial intelligence models in multidimensional data analysis tasks: a comparative study with large language models
Citations: 0
Authors: 2
Year: 2026
Abstract
This study presents a comparative performance evaluation of state-of-the-art generative artificial intelligence models in the context of data analysis. Eight large language models (Claude, Gemini, ChatGPT, Qwen, Grok, DeepSeek, LLaMA, and Mistral) were tested on 13 distinct analytical tasks derived from the Titanic dataset. Performance was assessed using a multidimensional scoring rubric consisting of five main categories—technical accuracy, analytical depth, machine learning application, presentation and communication, and originality—with a total of 14 sub-criteria. Each model’s output was rated on a five-point scale by independent evaluators. Results indicate that Claude and Gemini outperformed the other models, particularly in tasks requiring reasoning and transparency, while LLaMA and Mistral showed weaknesses in higher-order cognitive tasks. Overall, the findings provide theoretical insight into the cognitive capacities of generative artificial intelligence models in data-driven contexts and offer practical guidance for model selection in applied analytics.
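The rubric-based aggregation described in the abstract (14 sub-criteria grouped under five main categories, five-point ratings from independent evaluators) can be illustrated with a minimal sketch. The sub-criterion names and the equal-weighting scheme below are assumptions for illustration only; the paper's actual sub-criteria and weights may differ.

# Minimal sketch of the scoring rubric described in the abstract.
# Sub-criterion names and equal weighting are illustrative assumptions,
# not the paper's actual rubric.
from statistics import mean

RUBRIC = {
    "technical_accuracy": ["correctness", "reproducibility", "data_handling"],
    "analytical_depth": ["insight", "statistical_rigor", "interpretation"],
    "machine_learning": ["model_choice", "evaluation", "tuning"],
    "presentation": ["clarity", "visualization", "structure"],
    "originality": ["novelty", "creativity"],
}  # five categories, 14 sub-criteria in total

def score_model(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average each sub-criterion's 1-5 ratings across evaluators,
    then average sub-criteria within each category."""
    scores = {cat: mean(mean(ratings[c]) for c in crits)
              for cat, crits in RUBRIC.items()}
    scores["overall"] = mean(scores.values())
    return scores

Under these assumptions, score_model takes per-sub-criterion rating lists (one entry per evaluator) and returns a mean score per category plus an unweighted overall average.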
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations