This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Ethical Use of Artificial Intelligence in Academic Journal Writing: A Systematic Review Analysis
Citations: 0
Authors: 3
Year: 2025
Abstract
The rapid adoption of generative artificial intelligence (AI), particularly since ChatGPT's launch in December 2022, has transformed academic journal writing while introducing significant ethical challenges to scholarly publishing. This systematic literature review (SLR), adhering to PRISMA 2020 guidelines, examined 44 peer-reviewed studies (2021-2025) to comprehensively assess the ethical dimensions of AI-assisted academic writing. Analysis reveals that 96.7% of reviewed literature expresses substantial ethical concerns related to AI use, including plagiarism risks, loss of originality, authorship ambiguity, and AI-generated hallucinations. With 66.7% of studies focusing explicitly on generative AI and a sharp increase in publications in 2025, these findings confirm the urgent relevance of this issue. Key findings indicate a fundamental redefinition of academic authenticity in AI-mediated writing, alongside a critical gap between institutional policies and actual practices. Publisher analysis reveals that only 20-30% of major publishers maintain comprehensive AI policies, while 30-50% lack formal guidance, creating regulatory fragmentation. Technical detection safeguards remain inadequate, with real-world accuracy averaging 26% despite claimed 94-99% performance, and 60.9% of detection research employing fragmented methods. Notably, fewer than 26% of ethical recommendations are consistently implemented in practice, highlighting a persistent theory-practice gap. The review identifies eight critical research gaps requiring urgent attention: robust verification methods, psychological factors in ethical decision-making, discipline-specific guidelines, longitudinal impact assessment, global harmonization frameworks, faculty AI literacy, institutional sustainability, and assessment method adaptation. 
Recommendations converge on an integrated, principle-based approach emphasizing mandatory transparency and disclosure of AI use, sustained ethics education and AI literacy, adaptive discipline-specific frameworks, and meaningful human oversight. Rather than relying solely on detection-based enforcement, the literature advocates transparency-based approaches and ethical literacy as more effective long-term solutions for ensuring academic integrity in the AI era.