OpenAlex · Updated hourly · Last updated: 18 March 2026, 14:50

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

How do LLMs perform in the context of MCQs across different levels of thinking skills in a business education course at higher education? A comparison of ChatGPT, Gemini, and Copilot

2025 · 1 citation · Computers and Education: Artificial Intelligence · Open Access
Open full text at the publisher

Citations: 1

Authors: 5

Year: 2025

Abstract

This exploratory study investigates the performance of three widely used and freely available large language models (LLMs)—ChatGPT (GPT-3.5 Turbo), Gemini, and Copilot—in answering multiple-choice questions (MCQs) categorized by cognitive complexity based on the revised Bloom's Taxonomy. Although MCQs offer a structured, efficient, and scalable method of evaluation, a gap exists in the literature on how LLMs handle varying cognitive levels, particularly in a business course in higher education. Understanding LLM performance in this context is crucial, as students increasingly use LLMs not only to search for answers but also to receive tailored and scaffolded responses for an interactive and personalized learning experience. Using 100 MCQs on the Business Intelligence & Data Analytics chapter of this course—classified into Lower-Order Thinking Skills (LOTS) and Higher-Order Thinking Skills (HOTS)—the LLMs' accuracy scores were compared across these categories as well as across Bloom's subcategories. Findings from Generalized Linear Mixed Models reveal that all three LLMs perform better on LOTS questions than on HOTS questions. While descriptive differences in performance were observed across models, these differences were not statistically significant. However, prompt engineering, specifically FS-CD, can enhance LLM performance on complex tasks, and its effectiveness varies by model and cognitive level, significantly improving performance on HOTS questions for all LLMs, particularly on Apply-level tasks. These insights have potential implications for higher education, especially business education. The findings underscore the necessity of a critical approach when using LLMs as learning tools. Educators can leverage these insights to guide students in the effective use of LLMs for mastering complex concepts and skills.
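As an illustration of the analysis described in the abstract, the sketch below shows how a generalized linear mixed model for binary answer correctness could be fitted in Python with statsmodels, using the LLM and the LOTS/HOTS level as fixed effects and a random intercept per question. This is not the authors' code; the file name, column names, and model specification are assumptions for illustration only.

```python
# Minimal GLMM sketch (assumed data layout, not taken from the paper).
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format results: one row per (question, model) attempt,
# with columns question_id, model, level ("LOTS"/"HOTS"), and correct (0/1).
df = pd.read_csv("mcq_results.csv")

glmm = BinomialBayesMixedGLM.from_formula(
    "correct ~ C(model) + C(level)",      # fixed effects: LLM and cognitive level
    {"question": "0 + C(question_id)"},   # random intercept for each question
    df,
)
result = glmm.fit_vb()                    # variational Bayes estimation
print(result.summary())
```

Treating the question as a random effect accounts for the fact that each of the 100 MCQs is answered by all three models, so observations on the same item are not independent.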

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Academic integrity and plagiarism · Clinical Reasoning and Diagnostic Skills