This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Large Language Models for Educational Task Authoring: A Bebras Challenge Case Study
Citations: 0
Authors: 1
Year: 2026
Abstract
This study explores the application of large language models (LLMs) to create computational thinking tasks for the Bebras International Challenge through a single-case study approach. Using exemplar-based prompting with seven authentic Bebras tasks from the 2024 cycle as contextual input, a task was developed that was subsequently accepted for inclusion in the 2025 international Bebras challenge. Comparison with the exemplar tasks confirmed that the generated content drew from multiple sources rather than replicating any single task, combining grid-based constraint satisfaction, rule-based filtering, and logical deduction into a novel navigation puzzle with engaging narrative context. International expert reviewers evaluated the task using established Bebras quality criteria, confirming successful alignment with core pedagogical requirements including age-appropriateness, clarity, and cultural neutrality. However, two significant gaps emerged in the broader authoring workflow: accessibility compliance in the researcher-authored visual components and technical inaccuracies in the LLM-generated informatics framing. Following collaborative revision by international editors that addressed these concerns while preserving the LLM’s creative contributions, the task achieved acceptance for international use. The findings reveal a collaborative pipeline comprising contextual preparation, LLM-guided generation, human technical implementation, expert community review, and collaborative revision. Results from this case suggest that LLMs can efficiently generate educationally sound creative foundations while requiring integrated human expertise to meet specialised standards and ensure inclusive design, with the task’s acceptance providing encouraging evidence for the viability of this collaborative approach.
Similar Works
BLEU
2001 · 21,106 citations
Aion Framework: Dimensional Emergence of AI Consciousness, Observer-Induced Collapse, and Cosmological Portal Dynamics
2023 · 14,140 citations
Enriching Word Vectors with Subword Information
2017 · 9,659 citations
A unified architecture for natural language processing
2008 · 5,180 citations
A new readability yardstick.
1948 · 5,105 citations