OpenAlex · Updated hourly · Last updated: 11.05.2026, 03:36

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Ethical Use of Artificial Intelligence in Academic Journal Writing: A Systematic Review Analysis

2025 · 0 citations · International Journal of Research and Scientific Innovation · Open Access

Citations: 0
Authors: 3
Year: 2025

Abstract

The rapid adoption of generative artificial intelligence (AI), particularly since ChatGPT's launch in December 2022, has transformed academic journal writing while introducing significant ethical challenges to scholarly publishing. This systematic literature review (SLR), adhering to PRISMA 2020 guidelines, examined 44 peer-reviewed studies (2021-2025) to comprehensively assess the ethical dimensions of AI-assisted academic writing. Analysis reveals that 96.7% of reviewed literature expresses substantial ethical concerns related to AI use, including plagiarism risks, loss of originality, authorship ambiguity, and AI-generated hallucinations. With 66.7% of studies focusing explicitly on generative AI and a sharp increase in publications in 2025, these findings confirm the urgent relevance of this issue. Key findings indicate a fundamental redefinition of academic authenticity in AI-mediated writing, alongside a critical gap between institutional policies and actual practices. Publisher analysis reveals that only 20-30% of major publishers maintain comprehensive AI policies, while 30-50% lack formal guidance, creating regulatory fragmentation. Technical detection safeguards remain inadequate, with real-world accuracy averaging 26% despite claimed 94-99% performance, and 60.9% of detection research employing fragmented methods. Notably, fewer than 26% of ethical recommendations are consistently implemented in practice, highlighting a persistent theory-practice gap. The review identifies eight critical research gaps requiring urgent attention: robust verification methods, psychological factors in ethical decision-making, discipline-specific guidelines, longitudinal impact assessment, global harmonization frameworks, faculty AI literacy, institutional sustainability, and assessment method adaptation. 
Recommendations converge on an integrated, principle-based approach emphasizing mandatory transparency and disclosure of AI use, sustained ethics education and AI literacy, adaptive discipline-specific frameworks, and meaningful human oversight. Rather than relying solely on detection-based enforcement, the literature advocates transparency-based approaches and ethical literacy as more effective long-term solutions for ensuring academic integrity in the AI era.

Topics

Artificial Intelligence in Healthcare and Education · Academic Integrity and Plagiarism · Ethics and Social Impacts of AI