This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Sense and sensibility of article submission platforms are needed regarding verification of AI use: a stakeholders’ perspective
Citations: 0
Authors: 2
Year: 2025
Abstract
The development of artificial intelligence (AI) tools that can potentially automate components of the research process is accelerating rapidly. For journal editors, the undeclared use of generative AI (GAI) or large language models (LLMs) like ChatGPT to generate academic writing is particularly concerning. In response to GAI, many journals have incorporated an AI declaration statement into their article submission platform (ASP). The utility of such declarations may be limited, given their lack of verifiability. While a paper’s acknowledgements and ethical declarations constitute the primary location where authors formally declare accountability for their work and commit to proper academic conduct, a journal’s ASP serves as a second tier of verification. The configurations of GAI/LLM declarations in this space have not yet been formally characterized or assessed. The ASPs of the 50 top-ranked medical journals, according to the 2023 SCImago Journal Rank, were investigated and details on their GAI/LLM declarations were compiled. Of the 50 journals, 47 used an ASP, but due to exclusions (e.g., invitation-only submissions), only 36 were analysed. All Elsevier/Lancet journals included a mandatory DEI survey to complete registration, and only one journal had a mandatory ORCID requirement. Of the 36 ASPs analysed, only 13 (36%) had an AI-related clause, one specific to the use of ChatGPT. In contrast, among the instructions for authors (IFAs) of 49 of the journals, 44 (90%) had an AI-related clause. Drawing from the experience with these top-ranked medical journals, we advise that they—as well as other medical journals—ensure that important ethical clauses that appear in their IFAs also appear in their ASPs, so that there is congruency among ethical statements related to AI use. Regarding GAI/LLM use, the biggest challenge remaining for the publishing industry is how to confirm the veracity of statements made on ASPs.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations