This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Our Words and the Words of Artificial Intelligence: The Accountability Belongs to Us
Citations: 2
Authors: 2
Year: 2023
Abstract
What we say and the words we put together make a difference in the lives of individuals, families, and communities who are now, or could be in the future, affected by cancer. We are uniquely and fully accountable for our spoken and written words. As authors, we are the ones who must stand by our written words or move carefully and swiftly to retract or revise our words if an error occurs. Catching an error before it impacts others is difficult, as words are repeated, even misquoted, by others. We know that erroneous information creates public harm. Writing words to convey ideas or intentions effectively and clearly is very hard. As authors, we know that the hard work of writing may not result in the intended goal of publishing, since the likelihood of an article's acceptance in some journals is discouragingly low. Working hard to put words and ideas together for positive impact is admirable and needed. The impact of cancer on lives needs our words to be well placed and well thought out, and for this we are accountable. Currently, large language models (LLMs) based on artificial intelligence (AI) are being intensely reviewed by health care professionals and scientists for their possible fit in research, including the writing of articles. The LLMs, now widely available through tools such as ChatGPT, Jasper, Surfer, GrowthBar, Write Sonic, Peppertype, and Closers Copy, use machine learning to generate text (https://renaissancerachel.com/best-ai-writing-tools/). They are content writing platforms trained on historical data. Some come with guarantees of quality. Some offer grammar checks, plagiarism review, and citations for the generated content. Others retrieve content from diverse sources but without attribution. Some are initially, although temporarily, available without cost. Most come with a fee; therefore, those who can pay may receive a benefit not available to all, thereby creating a new equity issue.
Some of the LLMs have human trainers to improve their abilities to produce accurate and sensible statements. In essence, they are writing tools that iteratively create content. Developers of these models acknowledge that the created content can be inaccurate. There are likely legitimate uses of these LLMs. Authors share with each other the uncomfortable experience of getting writer's block, or getting stuck without the right words. The models can take words and build them into an essay or a letter. In another example, described in a recent editorial for a special issue of a journal, the guest editors spoke to the benefit of using one of these models to help detect data patterns when working with large data sets, as in computational methods, or to predict cellular behavior in studies of cell regulation.1 These examples represent a legitimate, although limited, use of LLMs in science with well-defined and applied parameters. Please note, however, the recognized and essential need to link such methods with sound theory to truly extract insight from the output of these models applied to large datasets. A human interpretation for insight and explanation is needed. Despite the legitimate uses of the models, there are risks of abuse.2,3 Their use in science merits strong questioning. Recently, one of the LLMs, ChatGPT, was listed as a coauthor on a published editorial.4 Our principles regarding the use of LLMs in articles submitted to Cancer Care Research Online and its sister journal, Cancer Nursing, are clear: an AI authoring tool does not meet the standards of authorship as defined by the International Committee of Medical Journal Editors5; therefore, no LLM will be credited as an author. One of the criteria for being an author is to be fully accountable for all aspects of the content in the published article—able to speak to the roots of the research, its conduct, the interpretation of the data, and the implications of the results. No LLM can do this.
No AI can defend the accuracy of its words. We as authors are accountable for these cognitive connections and interpretations. Certainly, if one of the LLMs is used in any way to assist with preparing the article, this must be noted in the Methods section. This is an acceptable recognition of a step carefully taken to organize the content in a way that helped to extract meaning, with the accuracy of the AI's words then verified by the true authors before being placed alongside their own in the article. Because of the possible impact of these models on science, new wording is now being added to our Instructions for Authors guidelines. In agreement with the position of the Committee on Publication Ethics (COPE),6 this new wording is: "Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used, and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics." The necessary training of LLMs is on historical data. Although science also builds upon past findings and previously established methods, science is very much about the future and how to make it better. In our case, we seek to use our efforts and our words to improve the lives of all affected by cancer. We at Cancer Care Research Online and its sister journal, Cancer Nursing, will continue to closely follow the evolution of the LLMs and their use in science and, as needed, add to our principles for authorship. In the meantime, we salute authors who continue to work, and work hard, to be accountable for their written words.