This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
ChatGPT as an Instructor’s Assistant for Generating and Scoring Exams
Citations: 23
Authors: 4
Year: 2024
Abstract
Generative intelligence technologies like ChatGPT hold significant promise across various sectors, particularly in education. This study assessed ChatGPT’s proficiency in responding to questions from University Entrance Exams typically administered to senior secondary students. Our findings indicate that ChatGPT version 4.0 consistently outperformed students, achieving higher average scores across exams from the past four years. However, it still committed errors in about 20% of its responses. Despite this, ChatGPT 4.0 demonstrated a robust capability to comprehend and produce natural language within a chemical context. Consequently, by applying diverse prompt engineering techniques, this AI was able to create short-answer questions and numerical problems that closely mimic the format and conceptual content of University Entrance Exams. We also confirmed that ChatGPT 4.0 could grade exams, showing a significant correlation with scores given by human evaluators but lower than that among human graders. This discrepancy and other practical considerations limit its application in grading exams.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations