This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Leveraging Artificial Intelligence in Scholarly Publishing
Citations: 1 · Authors: 3 · Year: 2025
Abstract
The integration of Artificial Intelligence (AI) into scholarly publishing constitutes a structural transformation of historical significance, fundamentally reshaping how knowledge is produced, evaluated, and disseminated. This study presents a systematic analysis of AI adoption within the global research ecosystem, focusing on the critical period from 2021 to late 2025. Using a secondary data analysis framework, the paper examines the dual role of generative AI and large language models (LLMs) as both enablers of unprecedented efficiency and sources of emerging epistemic risk. Drawing on bibliometric evidence, industry reports, and peer-reviewed literature, the analysis reveals a rapid escalation in AI use among researchers, reaching 58% globally in 2025 compared to 37% in 2024. While the literature consistently demonstrates that AI substantially accelerates scholarly workflows—most notably in grant writing, literature synthesis, and preliminary review—it also exposes systemic vulnerabilities, including citation hallucination, opacity in reasoning, and erosion of academic integrity. These risks are compounded by the potential amplification of epistemic injustice, as AI systems trained on dominant linguistic and cultural corpora may marginalize non-Western and non-native English scholarship. The study is guided by two objectives: (i) to evaluate the operational efficacy of AI in streamlining research workflows and (ii) to assess the ethical and institutional implications of emergent “posthuman” authorship. Findings indicate that while AI-assisted tools can reduce grant preparation time by more than 90%, they simultaneously generate non-verifiable citations at rates that threaten the cumulative reliability of the scholarly record. Comparative analysis of detection tools and publisher policies further demonstrates that existing governance mechanisms are fragmented, biased, and insufficient for AI-scale knowledge production. The paper argues that academia is entering a posthuman phase of authorship in which human–machine collaboration destabilizes conventional notions of originality, accountability, and intellectual ownership. Without robust governance frameworks and a redefinition of scholarly integrity, the scientific record risks contamination by machine-generated simulacra of knowledge, undermining trust in research as a public good.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations