This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Best Practices for the Use of Generative Artificial Intelligence for Authors, Peer Reviewers, and Editors
Citations: 1
Authors: 1
Year: 2023
Abstract
Google defines artificial intelligence (AI) as “man-made systems that perform tasks which require human intelligence, such as decision-making, visual perception, speech recognition, and language translation.” Generative AI (GenAI) is a self-learning class of AI algorithms capable of generating text, images, or other media. A large language model (LLM) is a GenAI model trained by deep learning on large data sets, capable of undertaking various natural language processing tasks in response to user prompts.[1] In late 2022, the human-like interactivity and commendable language skills of a generative AI LLM chatbot (Chat Generative Pretrained Transformer [GPT]-3) caught everyone’s imagination, and the general public enthusiastically joined in the creative use of chatbots to generate textual outputs. Since then, there have been several debates and discussions on GenAI and its impact on various fields of human endeavor, including the scientific publication process.[2]

THE PRESENT CONCERNS OF GENERATIVE ARTIFICIAL INTELLIGENCE

At present, for scientific publications, the use of LLM GenAI chatbot technology is fraught with uncertainty: a general-purpose LLM “deep learns” from all available open-source material on the Internet and is therefore prone to acquiring biased and incorrect information from the net and incorporating it in its output. Research scholars have recently shown that generative AI can produce a completely fabricated scientific paper that appears authentic.[3,4]

Hallucinations

A major problem with a large language GenAI model is hallucination: it can generate fictitious information and present it as accurate fact. In January this year, I encountered a repetitive hallucination: while the chatbot generated an authentic-looking write-up in response to my serious scientific query, it supported the impressive text with 14 nonexistent citations across 3 requests for accurate references. A useful tip for research guides and teachers is that they can exploit this weakness of chatbots to quickly spot AI-related plagiarism in work submitted by their students by merely verifying that the cited references exist.[5]
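This citation-checking tip lends itself to simple automation. As a rough illustration (not part of the article), the following Python sketch queries the public Crossref REST API for each reference string and flags any reference without a plausible match; the helper name, similarity threshold, and sample reference are invented for the example, and flagged items still need human review.

```python
# Minimal sketch of the citation-verification tip: look each reference up
# in Crossref and flag those with no plausible match. The threshold and
# the sample reference below are illustrative assumptions.
import requests
from difflib import SequenceMatcher

def citation_exists(reference: str, threshold: float = 0.6) -> bool:
    """Return True if Crossref lists a work whose title resembles the reference."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": reference, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        title = " ".join(item.get("title", [])).lower()
        # Fuzzy-match each candidate title against the reference string.
        if title and SequenceMatcher(None, title, reference.lower()).ratio() >= threshold:
            return True
    return False

reference = "On the dangers of stochastic parrots: can language models be too big?"
verdict = "found" if citation_exists(reference) else "NOT FOUND (possible hallucination)"
print(f"{verdict}: {reference}")
```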
Reproducibility

GenAI models may yield inconsistent results: because they are self-learning, the generated output can vary from run to run. This violates an important principle of scientific research, the reproducibility of findings by other studies.

Validation

It is not possible to fully validate AI output because the model’s algorithm is self-generated through deep learning and is therefore not verifiable.

Data confidentiality

Data confidentiality is an issue with many GenAI models, as they use search prompts and other user inputs for learning, recall, and reuse.[6]

Stochastic generative artificial intelligence

Another known shortcoming of using GenAI tools for serious scholarly work is that LLM GenAI models do not truly comprehend as human minds do but work like a “stochastic parrot”: they generate output based on the probability of how various components of language combine, without any reference to meaning.[7]
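To make the “stochastic parrot” point concrete, here is a toy Python sketch, with an invented training snippet, that generates fluent-looking word sequences purely from observed co-occurrence probabilities and with no representation of meaning; real LLMs are vastly larger, but the sampling principle is the same, and the run-to-run variability of the output also echoes the reproducibility concern above.

```python
# Toy "stochastic parrot": a bigram model that emits the next word purely
# from co-occurrence frequency, with no notion of meaning. The corpus is
# an arbitrary example invented for this illustration.
import random
from collections import defaultdict

corpus = ("generative models generate text from probabilities "
          "models generate plausible text without meaning").split()

# Record every observed follower of each word; duplicates in the list
# make random.choice sample in proportion to observed frequency.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def parrot(start: str, length: int = 8) -> str:
    """Sample a word sequence by repeatedly drawing a probable next word."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:  # dead end: no observed continuation
            break
        words.append(random.choice(followers))
    return " ".join(words)

# Fluent-looking but meaning-free; repeated calls give different outputs.
print(parrot("models"))
```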
COMMITTEE ON PUBLICATION ETHICS, WORLD ASSOCIATION OF MEDICAL EDITORS, AND GENERATIVE ARTIFICIAL INTELLIGENCE

The Committee on Publication Ethics (COPE) has periodically discussed the development of AI tools. In February 2023, COPE released its recommendations on the ethical use of GenAI. The ethical concerns center on three principles: accountability, transparency, and confidentiality, and involve three key players, namely, authors, peer reviewers, and editors.[8,9] In May 2023, the World Association of Medical Editors considered the pros and cons of GenAI LLM chatbots and released its recommendations on their use in scholarly publications;[10] the key recommendations are as follows:

- Chatbots cannot be authors.
- Authors should be transparent and disclose if chatbots were used to generate scholarly work.
- Authors are accountable for the accuracy and nonplagiarism of any chatbot-generated output incorporated in their paper.
- Editors and peer reviewers should disclose to each other and to the authors any use of chatbots in the peer review and publication process.
- Editors need appropriate tools to help them detect AI-generated or altered content.

BEST PRACTICES FOR THE ETHICAL USE OF GENERATIVE ARTIFICIAL INTELLIGENCE TOOLS

Little known to the lay public, we have been using AI-based applications and tools for well over three decades to help with academic research and publishing. A plethora of AI tools is available now, too many to list here. They have accelerated research to discover new drugs and therapies and to guide accurate medical diagnoses. These generative AI tools can contribute greatly to current best practices for effective research conduct and efficient scholarly publishing.[11–13]

Identifying the knowledge gap for further research

The first laborious step in scientific research is to study all relevant literature, identify research gaps, and conceive a conceptual framework. AI-driven search engines like Powerdrill or Litmaps can analyze the relevant literature to help identify lacunae in current knowledge and to quickly comprehend variations among published papers. This can help a researcher focus on a project aimed at closing the current knowledge gap, enhancing the relevance and utility of the proposed study. Generative AI tools, such as Semantic Scholar, Consensus, and Elicit, can curate voluminous literature and quickly extract summaries of research studies relevant to the research proposal. Researchers can also exploit the multilingual capacity of AI-based tools, like AskYourPDF, to summarize scholarly work and open a clear window into relevant scientific publications in other languages.

Writing the literature review

Generative AI tools such as Jenni and SciSpace can be used to quickly create a draft outline of the literature review, which the research team can then modify and expand with its own insights and opinions. AI applications like MirrorThink can help ensure academic integrity by scrutinizing scholarly papers to verify the assertions made therein.

Collaborating in multicenter research projects

AI tools for research collaboration and project management, for example, SciNote and ProofHub, facilitate time-bound task management, efficient file sharing, effective communication, and collaborative creation of documents. In addition, the research team can set priorities, share responsibilities, create a dashboard to monitor progress, and schedule automated, timely reminders so that work stays ahead of deadlines.

Data analysis and visualization

GenAI tools such as Polimer, Julius, and the advanced data analysis feature in GPT-4 can autonomously analyze and visualize research datasets to reveal patterns, which may offer insights that researchers can elaborate on when discussing their study results. The latest version of MS Excel, too, has incorporated generative AI to offer advanced data analysis and visualization.

Creating reports and manuscripts

AI-based editing tools such as Grammarly and ChatGPT can quickly produce grammatically correct manuscripts. QuillBot, Trinka, and Wordvice AI incorporate language-enhancement features that help one write a grammatically correct research document in a specific scientific tone and style and suggest alternative words to paraphrase text and avoid plagiarism. These tools are especially useful for researchers who are not native English speakers.

Scientific publication process

AI tools like Typeset.io can guide researchers in finding relevant high-impact journals for their research work and in tracking its progress through the publication process.

Peer review

AI tools are changing the way research is evaluated during peer review. While peer review is certainly an important, time-tested quality-control mechanism in scientific research, it can also be time-consuming and depends heavily on the availability and proficiency of reviewers. AI-powered tools can potentially streamline the peer review process, reduce its bottlenecks, and improve overall accuracy. GenAI applications for peer review, like HeyScience, can generate constructive and meaningful feedback, making manuscript evaluation less dreary and more efficient for peer reviewers; for the editorial team, this could mean a faster turnaround through the peer review process.[14] Undoubtedly, these AI-based applications are here to stay, as they have made scientific research and publishing less dreary, faster, and more efficient.

CONCLUSION

To sum up, researchers and academicians can adopt the current best practices to use various generative AI tools responsibly and thus conduct research and scholarly publication more efficiently and effectively. Meanwhile, scientific publishers must issue guidelines to curb the misuse of GenAI tools and adopt strategies to ensure that the authenticity of published works is maintained in the era of generative AI.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations