OpenAlex · Updated hourly · Last updated: 14.03.2026, 04:28

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Fooling MOSS Detection with Pretrained Language Models

2022 · 8 citations · arXiv (Cornell University) · Open Access
Open full text at publisher

Citations: 8
Authors: 2
Year: 2022

Abstract

As artificial intelligence (AI) technologies become increasingly powerful and prominent in society, their misuse is a growing concern. In educational settings, AI technologies could be used by students to cheat on assignments and exams. In this paper we explore whether transformers can be used to solve introductory level programming assignments while bypassing commonly used AI tools to detect similarities between pieces of software. We find that a student using GPT-J [Wang and Komatsuzaki, 2021] can complete introductory level programming assignments without triggering suspicion from MOSS [Aiken, 2000], a widely used software similarity and plagiarism detection tool. This holds despite the fact that GPT-J was not trained on the problems in question and is not provided with any examples to work from. We further find that the code written by GPT-J is diverse in structure, lacking any particular tells that future plagiarism detection techniques may use to try to identify algorithmically generated code. We conclude with a discussion of the ethical and educational implications of large language models and directions for future research.


Topics

Artificial Intelligence in Healthcare and Education · Academic integrity and plagiarism · Imbalanced Data Classification Techniques