OpenAlex · Updated hourly · Last updated: 7 May 2026, 15:15

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Language Models Understand Themselves Better: A Zero-Shot AI-Generated Text Detection Method via Reading and Writing

2025 · 0 citations
Open full text at the publisher

Citations: 0 · Authors: 7 · Year: 2025

Abstract

The rapid development and widespread adoption of large language models (LLMs) in recent years have introduced significant risks, necessitating robust detection methods to distinguish AI-generated content from human-written text. Traditional training-based approaches often lack flexibility and frequently make predictions without supporting evidence, especially when adapting to new domains, which limits their interpretability. To address this issue, we propose a novel zero-shot detection framework, the Reading and Writing detection method. Our approach uses an autoregressive model to assess the intrinsic complexity of a text, while leveraging an autoencoder model to quantify the difficulty of reconstructing it. By integrating these two metrics, we effectively highlight the substantial differences between machine-generated and human-written text. We conduct extensive experiments on four large public datasets from state-of-the-art LLMs, including GPT-3.5, GPT-4, and open-source models such as LLaMa. The results demonstrate that our detection method is effective across various language generation models and text domains.
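The abstract describes combining two signals: a "reading" score from an autoregressive model (intrinsic text complexity) and a "writing" score from an autoencoder (reconstruction difficulty). The sketch below illustrates that two-metric structure only; the scorers are toy stand-ins (character-level entropy and zlib compression ratio, not the paper's LLM-based metrics), and the simple difference used to combine them is an assumption for illustration.

```python
import math
import zlib
from collections import Counter

def reading_score(text: str) -> float:
    """'Reading': intrinsic complexity of the text.
    Toy proxy: per-character entropy of the empirical distribution.
    (The paper uses an autoregressive LM; this unigram entropy is
    only a stand-in for that complexity estimate.)"""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def writing_score(text: str) -> float:
    """'Writing': difficulty of reconstructing the text.
    Toy proxy: compressed bytes per character via zlib.
    (The paper uses an autoencoder's reconstruction loss instead.)"""
    return len(zlib.compress(text.encode("utf-8"))) / max(len(text), 1)

def detection_score(text: str) -> float:
    """Integrate the two metrics. The difference below is an assumed
    combination rule, not necessarily the one used in the paper."""
    return reading_score(text) - writing_score(text)

# Highly repetitive text scores lower on the 'reading' proxy than
# varied natural-language text.
varied = "The quick brown fox jumps over the lazy dog near the river bank."
repetitive = "the model the model the model the model the model the model"
print(detection_score(varied), detection_score(repetitive))
```

In the zero-shot setting, a threshold on such a combined score would separate the two classes without any detector training; only the choice of scoring models matters.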

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Topic Modeling · Text Readability and Simplification