This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Let's Do It Ourselves: Ensuring Academic Integrity in the Age of ChatGPT and Beyond
5 citations · 3 authors · 2023
Abstract
<p>This paper addresses the emerging challenge posed by large language models (LLMs) such as ChatGPT, which can generate solutions to tasks traditionally used to develop students' analytical and programming skills, particularly in programming education. The widespread availability of AI-generated solutions risks undermining the learning process and skill acquisition by enabling students to submit such solutions instead of practicing themselves. To address this challenge, our paper outlines a holistic strategy that combines educational initiatives, state-of-the-art plagiarism detection mechanisms, and an innovative steganography-based technique for watermarking AI-produced code. This multifaceted approach aims to give evaluators the tools to distinguish between code generated by ChatGPT and code genuinely written by students. With the collective efforts of educators, course administrators, and partnerships with AI developers, we believe it is feasible to uphold the integrity of programming education in this age of code-producing LLMs.</p>
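The abstract mentions a steganography-based watermarking technique for AI-produced code but does not detail it on this page. As a toy illustration only (not the paper's actual method), one simple steganographic channel in source code is trailing whitespace, which survives copy-paste yet is invisible to a casual reader. The function names and the bit encoding below are hypothetical:

```python
# Toy sketch of steganographic watermarking in source code (illustrative only,
# NOT the method from the paper): encode watermark bits as trailing whitespace,
# where a trailing space on line i means bit i is '1' and no space means '0'.

def embed_watermark(code: str, bits: str) -> str:
    """Append a trailing space to line i when bits[i] == '1'."""
    lines = code.split("\n")
    marked = []
    for i, line in enumerate(lines):
        if i < len(bits) and bits[i] == "1":
            marked.append(line + " ")
        else:
            marked.append(line)
    return "\n".join(marked)

def extract_watermark(code: str, n_bits: int) -> str:
    """Recover n_bits by checking each line for a trailing space."""
    lines = code.split("\n")
    return "".join(
        "1" if i < len(lines) and lines[i].endswith(" ") else "0"
        for i in range(n_bits)
    )

sample = "def add(a, b):\n    return a + b\n\nprint(add(2, 3))"
marked = embed_watermark(sample, "1011")
recovered = extract_watermark(marked, 4)
```

A real scheme would need to be far more robust, since trailing whitespace is trivially stripped by auto-formatters; this sketch only shows the embed/extract principle.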
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations