This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
WIP Post-Assessment Processes Given the Rise of Generative AI: Findings from the Literature
Citations: 1
Authors: 2
Year: 2024
Abstract
Free-to-use generative AI (GAI) threatens assessment integrity. The scholarship establishes that GAI can produce passing to highly sophisticated responses to a range of assessment items, and detecting AI-generated output is fraught: human detection is unreliable, and there are no proven software solutions at the time of writing. Even promising detection solutions are likely to become obsolete as GAI evolves. The challenge for instructors is reliably and consistently establishing the authorship of student submissions. There are two main perspectives on academic cheating: proactive approaches appeal to students' honor and precede submission, whereas punitive strategies are employed after submission with the intent of detecting and punishing dishonesty. This paper focuses on post-assessment regimes, in which academic integrity is checked after submission. This work-in-progress, research-to-practice paper collates post-assessment strategies from the literature for detecting unsanctioned use of GAI. Two sets of scholarship were synthesized to answer one question: what post-assessment strategies are there for detecting unsanctioned use of GAI? The first set of literature focuses on post-assessment strategies in general; the second addresses post-assessment strategies given the proliferation of free-to-use GAI. The Education Resources Information Center (ERIC) database was searched for general post-assessment strategies, yielding five tools. This search of general, non-discipline-specific scholarship returns instructors to practices that were tried and tested before the advent of GAI. They may form part of a larger strategy for reducing GAI misuse, but this requires further study. The second set of tools, extracted from scholarship published in IEEE Xplore, totals six. These are post-assessment tools specific to detecting GAI-generated text.
The study reveals that as scholars discuss the nature of GAI misuse, there is a need to problematize definitions of that misuse. Several tools are available to instructors for detecting dishonest content, but no single tool is infallible. Instructors therefore need to think not in terms of a one-off detection solution but of a regimen of post-assessment checks that compensates for the fallibility of individual tools. There is also a need to test which tools work best in particular teaching and learning contexts.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations