This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
If an artificial intelligence chatbot wrote a scientific article, how would we know?
Citations: 2
Authors: 2
Year: 2023
Abstract
One of us (RW) is the Editor-in-Chief of Nurse Education in Practice and the other (SO) is author of an editorial in the journal1 which recently received a great deal of media attention following its citation in an article in Nature News.2 The editorial made history as the first article published in a reputable international journal using an open artificial intelligence (AI) tool, specifically the chatbot ChatGPT created by OpenAI (https://chatgp.com/; accessed 19 January 2023). The article also received attention due to the inclusion of ChatGPT as a co-author. This was an attempt at transparency on the part of the first (and only) human author to further the discussion on the potential impact of automated AI tools on scientific research and education, and it generated heated debate in the wider publishing industry and scientific community. The co-authorship had been overlooked by RW and, since editorials are not processed on the publisher's Editorial Manager system, it was not picked up, necessitating a corrigendum. However, the publication of the editorial and the interest generated around it point to the possibilities of AI being used to write scientific manuscripts and other types of communications.3 Such a development is almost inevitable and, as authors, publishers, and editors, we may push, Canute-style, against this tide, but it is unlikely we will be able to stop it. The issue, therefore, is how we manage this and, specifically: whether we ban (if, indeed, we could) the use of AI in writing scientific manuscripts; whether we have the capability to detect it, either with existing similarity-checking software or with purpose-built tools; or whether we accommodate the inevitable and make provisions by acknowledging it in certain ways. After all, we say above that SO's editorial was the first. We cannot know that for certain.
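The detection question raised above can be made concrete. Conventional similarity-checking software compares overlapping word sequences against known sources, which is why freshly generated chatbot prose tends to evade it: the wording is new even when the ideas are not. A minimal sketch in Python (illustrative only, and not the algorithm of any actual plagiarism checker):

```python
def ngrams(text, n=5):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(candidate, source, n=5):
    """Jaccard overlap of word 5-grams: a crude stand-in for
    similarity-checking software."""
    a, b = ngrams(candidate, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

source = ("the integrated stress response facilitates cellular "
          "adaptation to unfavorable conditions")
copied = ("the integrated stress response facilitates cellular "
          "adaptation to unfavorable conditions")
paraphrased = ("cells adapt to unfavorable conditions through "
               "the integrated stress response")

print(similarity(copied, source))       # 1.0 — verbatim copying is caught
print(similarity(paraphrased, source))  # 0.0 — reworded text shares no 5-grams
```

Chatbot output behaves like the paraphrased case: it scores near zero against any single source, so overlap-based tools alone cannot answer the question posed in this article's title.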
With the above introduction in mind, the purpose of this article is to consider the use of AI in writing manuscripts, to exemplify its use, limitations, and consequences, and to outline a strategy for managing the use of AI in writing manuscripts. Before proceeding, we will use the following definition of AI provided by IBM (https://www.ibm.com/topics/artificial-intelligence; accessed 19 January 2023): ‘AI leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.’ Clearly, the best written material produced by AI should be indistinguishable from material produced by the human mind. We have long been used to electronic aids in our endeavours to write and publish academic articles, though these do not, precisely, count as AI. Nevertheless, some of us can cast our minds back to hand searching the library shelves for reference material (there was no alternative), conducting reviews using vast and copious volumes of the Science Citation Index, and submitting searches to librarians, often waiting weeks for an outcome and then further weeks to retrieve the articles. That lengthy and laborious process is now, essentially, mimicked, often in a matter of minutes, using online search engines and access to full and downloadable copies of the manuscripts we require to inform our writing. In our writing, we use word-processing packages such as Microsoft Word, which prompt us to correct our spelling, rearrange our grammar, and avoid words which may be considered offensive. If we are uncertain about where to submit our manuscript, then we have online tools such as JANE (Journal Author Name Estimator; https://jane.biosemantics.org/; accessed 19 January 2023) into which we may type keywords or an abstract and be provided with a list of journals relevant to our topic. However, chatbots have the potential to elevate the use of AI in writing scientific articles to a new level.
A chatbot can be applied beyond the location, assimilation, and organisation of raw material such as references, to the location and assimilation of ideas (albeit pre-existing ideas on the Internet) and their organisation into coherent and convincing narratives. Naturally, caution is required, as chatbots are only as good as the combination of the questions they are asked, the large language model they are trained on, and the existence of relevant material on the Internet to analyse and synthesise.4 As has been described,5 they do not always get their facts right. For that reason, anyone using a chatbot should also have some pre-existing expertise with AI tools and knowledge of the subject area. Beyond the assimilation of pre-existing information online, AI platforms can help authors to make sense of very complex material. One such AI platform is SciSpace (https://typeset.io/; accessed 19 January 2023), which has several functions, including rapid retrieval of articles relevant to a topic (thus bypassing some of the tedium of searching by alternative methods) and the ability to take a long and complex chunk of text and reduce it, with meaning, to a short and comprehensible summary. We envisage the AI facilities of a chatbot such as ChatGPT and a platform such as SciSpace being used in combination to facilitate the writing of academic articles. What follows is a short demonstration of the capabilities of both these AI platforms. In providing this, we are not offering a judgement on the rectitude of using these platforms. While we will consider the limitations and risks outlined above, the complexity of these issues requires future discussion. To demonstrate the use of ChatGPT, we asked the chatbot two questions; its answers are provided below. ChatGPT is a language model that is trained on a large dataset of text from the internet. Its responses are based on patterns it has learned from that data.
While it can generate human-like text, it may not always produce accurate information. It is important to fact-check any information provided by the model before using it. Additionally, as a language model, it does not have the ability to know or understand the information it is generating; it simply generates text based on the patterns it has learned. ChatGPT is trained on a dataset of text from the internet. The specific sources of this text include a variety of websites, books, articles, and other forms of written content. However, it's important to note that the training data may not be from credible sources. Therefore, the information provided by the model might not be accurate, and it's always a good idea to fact-check the information before using it. An article that was generated by a machine would likely not meet these criteria and would be considered plagiarism. Additionally, the quality and accuracy of the information generated by a language model like ChatGPT may not meet the standards required for academic publishing.

The integrated stress response (ISR) facilitates cellular adaptation to unfavorable conditions by reprogramming the cellular response. ISR activation was reported in neurological disorders and solid tumors; however, the function of the ISR and its role as a possible therapeutic target in hematological malignancies remain largely unexplored. Previously, we showed that the ISR is activated in chronic myeloid leukemia (CML) cells and correlates with blastic transformation and tyrosine kinase inhibitor (TKI) resistance. Moreover, the ISR was additionally activated in response to imatinib as a type of protective internal signaling. Here, we show that ISR inhibition combined with imatinib treatment sensitized and more effectively eradicated leukemic cells both in vitro and in vivo compared to treatment with single agents.
The combined treatment specifically inhibited the STAT5 and RAS/RAF/MEK/ERK pathways, which are recognized as drivers of resistance. Mechanistically, this drug combination attenuated both interacting signaling networks, leading to BCR-ABL1- and ISR-dependent STAT5 activation. Consequently, leukemia engraftment was decreased upon double treatment in patient-derived xenograft mice bearing CD34+ TKI-resistant CML blasts carrying a PTPN11 mutation responsible for hyperactivation of the RAS/RAF/MAPK and JAK/STAT5 pathways. This correlated with the downregulation of genes related to the RAS/RAF/MAPK, JAK/STAT5 and stress response pathways and was associated with lower expression of STAT5-target genes regulating proliferation, viability and the stress response. Collectively, these findings highlight the effect of imatinib plus ISRIB in the eradication of leukemic cells resistant to TKIs and suggest potential clinical benefits for leukemia patients with TKI resistance related to RAS/RAF/MAPK or STAT5 signaling. We propose that personalized treatment, based on the genetic selection of patients carrying mutations that overactivate the targeted pathways and therefore make sensitivity to such treatment probable, should be considered as a possible future direction in leukemia treatment.

The abstract of this research paper discusses how combining inhibition of the integrated stress response (ISR) with imatinib treatment can more effectively eradicate leukemic cells in chronic myeloid leukemia, and suggests that personalized treatments based on genetic selection may provide clinical benefits for patients resistant to tyrosine kinase inhibitors.

Through simple conversation with the AI chatbot, the large language model that ChatGPT is based on can quickly respond to questions, answer follow-up queries, accept some of its own shortcomings, and challenge assumptions that someone might make when interacting with it.
These abilities may be useful to researchers who wish to learn about a new scientific topic, or to those more experienced in a field who want to understand emerging trends and keep abreast of novel developments which can inform their writing.7 Beyond this, the automated tool can rapidly generate paragraphs of well-written text, albeit unreferenced and only in the English language, that could be used to supplement and enhance the writing produced by human authors. The obvious downside is that the source material that ChatGPT draws on is not transparent, making its accuracy difficult to verify and reference correctly. This may give rise to plagiarism if the ideas that the chatbot produces are not credited to the original authors, although, because it learns by assimilating a vast amount of online content, it reproduces concepts that are likely to have been discussed by many researchers over time. SciSpace may prove a more useful tool, as it focuses on condensing the content of a single scientific article, which may be highly technical and complex, into one or two simple sentences that are easier to digest. It also suggests other relevant literature on the topic to read. However, a similar issue exists with validating the accuracy of this summary unless the researcher is an expert in the field, although, unlike chatbot output, it can be referenced. Nevertheless, these AI platforms are likely to be used to write scientific articles and will inevitably become more sophisticated over time, blurring the boundaries between human and automated scientific writing. For authors considering the use of these digital tools, an appreciation of their benefits and risks is needed before employing them to support writing. Like other electronic tools, clearly documenting how they are used when producing a manuscript, reporting their limitations, and acknowledging the contribution of the chatbot could improve transparency in the process.
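The kind of condensation described above can be approximated, very crudely, by classical extractive summarization: score each sentence by the average frequency of its words across the document and keep the top-scoring one. This sketch is illustrative only and bears no relation to the actual methods used by SciSpace or ChatGPT:

```python
import re
from collections import Counter

def extractive_summary(text, k=1):
    """Return the k sentences whose words are most frequent in the text."""
    # Split into sentences on terminal punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Count word frequencies over the whole text.
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    return ' '.join(sorted(sentences, key=score, reverse=True)[:k])

abstract = ("The integrated stress response is activated in chronic myeloid "
            "leukemia cells. Inhibiting the integrated stress response alongside "
            "imatinib eradicated leukemic cells more effectively. Funding came "
            "from several agencies.")
print(extractive_summary(abstract))  # picks the most thematically central sentence
```

Extractive methods like this can only quote sentences that already exist; the abstractive summaries produced by large language models generate new wording, which is precisely what makes their accuracy hard to verify.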
For editors, clearer guidelines on the use of AI platforms in scientific writing could enhance the publishing process. This may help us understand whether an AI chatbot wrote a scientific article or contributed in some way to its development, an important distinction that is emerging at the cutting edge of scientific research.

Roger Watson conceptualised the article and led the writing. Both authors revised and approved the draft manuscript.

We would like to thank OpenAI, who developed ChatGPT (https://chatgp.com/), and PubGenius Inc, who developed SciSpace (https://typeset.io/) and made them freely available for use.

The authors have no conflict of interest to declare.

Roger Watson is Editor-in-Chief of Nurse Education in Practice and Dean of the School of Nursing, Southwest Medical University, China. Siobhan O’Connor is a Senior Lecturer at the University of Manchester, UK, teaching and undertaking research on technologies that impact nursing and healthcare.

Data sharing is not applicable to this article as no new data were created or analyzed in this study.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,418 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,288 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,726 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,516 citations