OpenAlex · Updated hourly · Last updated: 19.03.2026, 06:26

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Are Neural Language Models Good Plagiarists? A Benchmark for Neural Paraphrase Detection

2021 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

Citations: 0
Authors: 4
Year: 2021

Abstract

Full-Text PDF

Title: Are Neural Language Models Good Plagiarists? A Benchmark for Neural Paraphrase Detection
Authors: Jan Philip Wahle, Terry Ruas, Norman Meuschke, and Bela Gipp
Contact email: wahle@uni-wuppertal.de; ruas@uni-wuppertal.de
Venue: JCDL
Year: 2021

================================================================
Dataset Description:

Training:
1,474,230 aligned paragraphs (98,282 original, 1,375,948 paraphrased with 3 models and 5 hyperparameter configurations, 98,282 paragraphs each) extracted from 4,012 English Wikipedia articles.

Testing:
The test sets are identical in size for all three paraphrase models (BERT-large cased, RoBERTa-large cased, Longformer-large uncased):

Source     | Original | Paraphrased
arXiv      |   20,966 |      20,966
Theses     |    5,226 |       5,226
Wikipedia  |   39,241 |      39,241

================================================================
Dataset Structure:

[og] folder: the original documents, split by data source into the folders [arxiv], [thesis], [wikipedia], and [wikipedia_train].

[`model_name`_mlm_prob_`probability`] folders (e.g., bert-large-cased_mlm_prob_0.15): each contains all examples paraphrased with the model `model_name` at Masked Language Modeling probability `probability`. Each paraphrase model/probability folder mirrors the structure of [og], with the subfolders [arxiv], [thesis], [wikipedia], and [wikipedia_train], plus an hparams.yml file containing the hyperparameters needed to reconstruct the dataset using the official repository.

================================================================
Files:

At the lowest folder level, each `.txt` file contains exactly one paragraph. The filename contains either "ORIG" for an original paragraph or "SPUN" for a paraphrased one.

================================================================
Code:

To prevent misuse of the code for constructing machine-paraphrased plagiarism, you must consent to our Terms and Conditions and email the signed version to one of the contact addresses above to obtain access to our repository (see TermsAndConditions.pdf).
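Given the folder layout and filename convention above, loading the dataset reduces to walking the [og] folder and the model/probability folders and pairing files by name. The following is a minimal Python sketch under the stated assumptions; in particular, pairing an "ORIG" file with its "SPUN" counterpart via a shared filename stem is an assumption about the naming scheme, not documented behavior, and the function name `collect_paragraphs` is illustrative.

```python
from pathlib import Path

def collect_paragraphs(root, split="wikipedia"):
    """Collect (original, paraphrased, model_folder) triples for one data source.

    Assumes the layout described above: an og/<split> folder with
    one-paragraph .txt files whose names contain "ORIG", and sibling
    folders named like bert-large-cased_mlm_prob_0.15/<split> whose
    file names contain "SPUN".
    """
    root = Path(root)
    # Index originals by filename stem with the "ORIG" marker removed.
    originals = {
        p.name.replace("ORIG", ""): p.read_text(encoding="utf-8")
        for p in (root / "og" / split).glob("*.txt")
    }
    pairs = []
    # Every paraphrase folder follows the `model_name`_mlm_prob_`probability` pattern.
    for model_dir in root.glob("*_mlm_prob_*"):
        if not model_dir.is_dir():
            continue
        for p in (model_dir / split).glob("*.txt"):
            key = p.name.replace("SPUN", "")
            if key in originals:
                pairs.append((originals[key],
                              p.read_text(encoding="utf-8"),
                              model_dir.name))
    return pairs
```

The resulting triples can feed a paraphrase-detection classifier directly, with the model-folder name serving as a label for which paraphrase model and MLM probability produced each example.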


Topics

Topic Modeling · Text Readability and Simplification · Artificial Intelligence in Healthcare and Education