OpenAlex · Updated hourly · Last updated: 21.03.2026, 09:47

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating the performance of generative artificial intelligence models in multidimensional data analysis tasks: a comparative study with large language models

2026 · 0 citations · Discover Computing · Open Access
Open full text at the publisher

Citations: 0 · Authors: 2 · Year: 2026

Abstract

This study presents a comparative performance evaluation of state-of-the-art generative artificial intelligence models in the context of data analysis. Eight large language models (Claude, Gemini, ChatGPT, Qwen, Grok, DeepSeek, LLaMA, and Mistral) were tested on 13 distinct analytical tasks derived from the Titanic dataset. Performance was assessed using a multidimensional scoring rubric consisting of five main categories—technical accuracy, analytical depth, machine learning application, presentation and communication, and originality—with a total of 14 sub-criteria. Each model’s output was rated on a five-point scale by independent evaluators. Results indicate that Claude and Gemini outperformed others, particularly in tasks requiring reasoning and transparency, while LLaMA and Mistral showed weaknesses in higher-order cognitive tasks. Overall, the findings provide theoretical insight into the cognitive capacities of generative artificial intelligence models in data-driven contexts and offer practical guidance for model selection in applied analytics.
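The rubric described in the abstract (five main categories, 14 sub-criteria, five-point ratings) can be aggregated in a straightforward way. The following is a minimal illustrative sketch, not the paper's actual code: the category names follow the abstract, but all scores, the model subset, and the equal-weight aggregation rule are assumptions for demonstration.

```python
# Illustrative sketch of rubric aggregation: each model is rated 1-5 on
# sub-criteria grouped into categories; the overall score is the mean of
# category means, so every category weighs equally regardless of how many
# sub-criteria it contains. All ratings below are made up.
from statistics import mean

# Hypothetical ratings: model -> category -> sub-criterion scores (1-5)
ratings = {
    "Claude":  {"technical accuracy": [5, 4, 5], "analytical depth": [5, 5]},
    "Mistral": {"technical accuracy": [3, 3, 2], "analytical depth": [2, 3]},
}

def overall_score(per_category):
    """Mean of per-category means (equal category weighting)."""
    return mean(mean(scores) for scores in per_category.values())

# Rank models by overall score, best first
for model, cats in sorted(ratings.items(),
                          key=lambda kv: overall_score(kv[1]),
                          reverse=True):
    print(f"{model}: {overall_score(cats):.2f}")
```

With multiple independent evaluators, each sub-criterion list would simply hold one score per evaluator; the same equal-weight aggregation applies.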

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Topic Modeling