This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Is there a place for chatbots in scholarly publishing?
Citations: 1
Authors: 2
Year: 2023
Abstract
It was fall 2008 and I was supervising a student in the predoctoral clinics at the school where I worked. The patient was taking Cymbalta, and I asked the student what the implications were for the patient's dental care. Without hesitation, he pulled out his new iPhone and started searching. I moved him away from the patient and said, “Excuse me! What do you think you're doing? You need to know this information.” He paused for a moment before he said something I have never forgotten: “Dr. Nalliah, I will always have this device on me. So why are you asking me to live without it?” Technology is not our enemy. It is a tool that enables us to pursue our interests and our careers better. In return, we must be careful not to become overly reliant on technology. Since that day in 2008, I hope I have grown as an educator and come to appreciate the wise words of my student. The release of the first iPhone in 2007 was a pivotal moment, and we feel dental education is at another critical juncture: the inception of chatbots built on large language models. Chatbots such as ChatGPT (OpenAI) have created excitement and alarm with their ability to assist with composing code, essays, or simple emails. Is this a disruptive force in academic publishing or a natural evolution in app development? In 2016, conversational chatbots came into widespread use with Apple's Siri, Amazon's Alexa, Google Assistant, and a few others. All of these applications were perceived as amazing technology at the time, allowing us to check the weather, make appointments on our calendars, play music, read, write, and send emails, and control home automation. Of course, chatbots and natural language processing have a long history.1 So, what is new and what is different this time?
The current form of natural language processing allows an artificial intelligence (AI) model to hold expanded, human-like conversations and to assist with composition tasks in response to questions, drawing on secondary research information found in its database. Generative AI models are trained on vast amounts of information from websites, books, news articles, and more, and can write “original copy” from that information. For example, ChatGPT has an extensive database of information up to September 2021 to draw from, with incredible speed. The tool is certainly useful for writing repetitive reports and other business tasks, but it does not currently have a role in academic publishing. However, it may be a useful tool for concept development at the initial stages of writing: it can write an outline to get you started, it can double-check your references, and it can even draft an entire manuscript with proper training. In addition, a chatbot could help with copyediting a document, which could be especially helpful if North American English is not your native language. In this way, you can think of it as a helpful tool, much like a sophisticated spelling and grammar checker. However, since it puts words together statistically, there is a chance that some of your creativity or meaning may be written out. There is a concern that chatbots could be used by bad actors to spread misinformation, or for academic misconduct in the form of plagiarism. Moreover, these tools are only going to improve, yet there is already early evidence of AI creating confusion and making assumptions that contribute to misinformation.2 There are tools that try to identify the “watermark” of content written by an AI chatbot: they look for sequences of words in a specific order and flag the text as likely written by AI. A document with greater perplexity, or randomness, of the text is more likely to have been written by a human. The accuracy of these detection tools is questionable.
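The perplexity heuristic mentioned above can be sketched in a few lines. Perplexity is the exponential of the average negative log-probability a language model assigns to each token; detection tools treat low perplexity (highly predictable text) as a hint of machine generation. This is a minimal illustrative sketch, not any real detector's implementation, and the per-token probabilities below are invented for illustration.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability.

    Lower perplexity means the text is more predictable to the model,
    which detectors read as a hint of AI generation; higher perplexity
    ("randomness") points toward human writing.
    """
    n = len(token_probs)
    avg_neg_logprob = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_neg_logprob)

# Hypothetical per-token probabilities from some language model:
predictable = [0.9, 0.8, 0.85, 0.95]  # model finds each token very likely
surprising = [0.2, 0.05, 0.1, 0.3]    # model is frequently surprised

print(perplexity(predictable))  # low  -> might be flagged as AI-written
print(perplexity(surprising))   # high -> reads as more human-like
```

In a real detector, the probabilities would come from a large language model scoring the document token by token; the questionable accuracy noted above stems partly from how coarse this signal is.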
To test the detection systems, we asked ChatGPT the following question: Explain why ChatGPT can be a real problem for professors and editors when it comes to plagiarism. ChatGPT responded:

ChatGPT can pose significant challenges for professors and editors when it comes to plagiarism due to its ability to generate human-like text. Here are a few reasons why:

1. Sophisticated Text Generation: ChatGPT is an advanced language model that can produce coherent and contextually relevant responses. It has been trained on a vast amount of data, including books, articles, and websites, enabling it to generate text that appears to be original and well-written.

2. Plausible Sentence Structures: ChatGPT can construct sentences with proper grammar, syntax, and logical flow, making it difficult to distinguish between original content and text generated by the model. It can mimic the writing style and voice of a specific author or adopt various writing tones, making it challenging to identify instances of plagiarism.

3. Lack of Attribution: ChatGPT does not inherently possess the capability to provide citations or attribute sources to the information it generates. When students or writers use ChatGPT to generate content without acknowledging that it was produced by an AI model, the resulting text may lack proper references or citations, giving the impression of original work.

4. Time-Effective Content Creation: As an AI, ChatGPT can rapidly generate large amounts of text on a wide range of topics. This efficiency may tempt users, such as students, to rely on the model for generating assignments or essays quickly. The ease and speed of content creation make it easier for individuals to submit plagiarized work without putting in the necessary effort or conducting proper research.

5. Detection Challenges: Plagiarism detection tools often rely on patterns and databases of existing content to identify similarities.
However, since ChatGPT can produce unique and previously unseen text, it may evade detection by such tools, especially if the generated content is not directly copied from existing sources. Given these factors, professors and editors face an uphill battle in identifying instances of plagiarism facilitated by ChatGPT. They must develop new strategies and tools to detect and prevent plagiarism effectively in an era where AI-powered language models can produce deceptive content.

We then ran this text through the GPT-2 Output Detector. The result was 99.98% “real,” even though the text was generated by AI. We then ran the same text through GPTZero, another detection tool, and received the response: “Your text is likely to be written entirely by a human.”

AI chatbot writing tools are here to stay, and the question arises: Should an AI chatbot be included as an author? Extraordinarily, there are already several scientific articles coauthored by ChatGPT. The first one was published in Nurse Education in Practice and is surrounded by controversy.3 This article is currently listed in PubMed with a corrigendum and without ChatGPT as an author. The Committee on Publication Ethics; Wiley, the Journal of Dental Education (JDE) publisher; and the JDE Editorial Board have weighed in on authorship and disclosure.4-6 Since AI tools cannot meet the requirements for authorship and take responsibility for the content of the work submitted, they should not be included as authors. Authors who use AI tools must transparently disclose, in the Materials and Methods (or similar) section of their paper, which tool was used and how. If the tool was used for editing, it should be disclosed in the acknowledgments. The chatbot industry is projected to grow dramatically due to chatbots' potential as customer service tools. Health care is one of the leading areas adopting AI and machine learning, along with chatbots.
As we see AI tools expand to be ubiquitous in health care, there may also be a new segment of health care education and research for us to study with the help of our natural language processing databases.
Related Works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,633 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,587 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,551 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,454 citations