This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Review of Detecting Text generated by ChatGPT Using Machine and Deep-Learning Models: A Tools and Methods Analysis
Citations: 2
Authors: 3
Year: 2025
Abstract
Recently, generative models such as ChatGPT have gained considerable attention because of their capacity to generate text almost identical to that produced by humans. However, ChatGPT raises several concerns, particularly regarding the integrity of academic work, the protection of personal information and security, over-reliance on artificial intelligence (AI), the evaluation of learning, and the accuracy of information. Distinguishing machine-generated writing from human-written text is one of the most critical issues at present. The purpose of this literature review is to provide a comprehensive, up-to-date analysis of the most recent methods for identifying text created by ChatGPT. It examines more than 60 academic papers, especially research articles published after the model’s release in 2022, and analyzes state-of-the-art machine learning, deep learning, and hybrid approaches for detecting AI-generated text. The review categorizes detection methods into statistical models, transformer-based architectures, perplexity-based techniques, and human-assisted evaluation. The findings indicate that deep learning models, particularly the Robustly Optimized BERT Pretraining Approach (RoBERTa) and the Cross-lingual Language Model with RoBERTa Architecture, achieve high detection accuracy (up to 99%), whereas traditional statistical methods exhibit limitations in distinguishing complex AI-generated content. This work recommends combining machine and deep learning techniques with human reviewers in ongoing efforts to distinguish AI-generated from human-written text. However, given the increasing sophistication and complexity of models such as ChatGPT, detection techniques must be continuously improved and refined to ensure reliability and maintain the integrity of content across various sectors.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations