This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Can ChatGPT Match the Experts? A Feedback Comparison for Serious Game Development
Citations: 7
Authors: 5
Year: 2024
Abstract
This paper investigates the potential and validity of ChatGPT as a tool for generating meaningful input for the serious game design process. Baseline input was collected from game designers, students, and teachers via surveys, individual interviews, and group discussions, all based on a description of a simple educational drilling game and its context of use. In these mixed-methods experiments, two recent large language models (ChatGPT 3.5 and 4.0) were prompted with the same description to validate the findings against those of the expert participants. In addition, we investigated how integrating an expert role into the prompt (e.g., "Answer as if you were a teacher", a "game designer", or a "student") affected the models' suggestions. The findings of these comparative analyses show that input from human expert participants and from large language models overlaps for some expert groups. However, the experts emphasize different categories of input and contribute unique viewpoints. This research opens the discussion on the trustworthiness of input generated by large language models for serious game development.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations