This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Research on the Application, Risks, and Regulation of Generative Artificial Intelligence Technology in the Field of Scientific Journal Publishing
Citations: 0 · Authors: 10 · Year: 2025
Abstract
The rapid advancement of generative artificial intelligence (AIGC) technology is reshaping the publishing ecosystem of scientific journals. While offering technological dividends for topic planning, content production, and dissemination optimization, it also introduces risks such as academic misconduct, data security breaches, and ethical violations. This study, grounded in the context of intelligent transformation in scientific journals, proposes a comprehensive analytical framework encompassing "in-depth application scenario exploration—quantitative risk assessment—collaborative governance implementation." Through literature analysis, case studies, and empirical research, it systematically constructs a three-level risk assessment matrix (technical/ethical/legal dimensions) for academic misconduct induced by generative AI and designs a multi-tiered collaboration mechanism (industry/institution/operational levels). Findings indicate that AI enhances publishing efficiency through applications like topic prediction and intelligent editing, yet necessitates guarding against risks such as algorithmic opacity, data leakage, and copyright disputes. By refining policies and regulations, strengthening technical safeguards, and optimizing industry collaboration mechanisms, balanced development of intelligent transformation can be promoted. Future research will validate the framework's effectiveness through "efficiency-quality-compliance" tri-dimensional quantitative metrics and cross-disciplinary pilot projects involving multiple journals. The AI application guidelines and risk prevention handbook developed in this study provide actionable guidance for frontline editorial practices, supporting high-quality development in the publishing industry.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations