This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Artificial intelligence in academic practices and policy discourses across ‘Big 5’ publishers
Citations: 0
Authors: 2
Year: 2026
Abstract
The present study investigates how the five largest academic publishers (Elsevier, Springer, Wiley, Taylor & Francis, and SAGE) are responding to the epistemic and procedural challenges posed by generative AI through formal policy frameworks. Situated within ongoing debates about the boundaries of authorship and the governance of AI-generated content, our research aims to critically assess the discursive and regulatory contours of publishers’ authorship guidelines (PGs). We employed a multi-method design that combines qualitative coding, semantic network analysis, and comparative matrix visualization to examine the official policy texts collected from each publisher’s website. Findings reveal a foundational consensus across all five publishers in prohibiting AI systems from being credited as authors and in mandating disclosure of AI usage. However, beyond this shared baseline, marked divergences emerge in the scope, specificity, and normative framing of AI policies. Co-occurrence and semantic analyses underline the centrality of ‘authorship’, ‘ethics’, and ‘accountability’ in AI discourse. Structural similarity measures further reveal alignment among Wiley, Elsevier, and Taylor & Francis, with Springer as a clear outlier. Our results point to an unsettled regulatory landscape where policies serve not only as instruments of governance but also as performative assertions of institutional identity and legitimacy. Consequently, the fragmented field of PGs highlights the need for harmonized, inclusive, and enforceable frameworks that recognize both the potential and risks of AI in scholarly communication.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations