This is an overview page with metadata for this scientific article. The full text is available from the publisher.
Matters arising: authors of research papers must cautiously use ChatGPT for scientific writing
Citations: 12
Authors: 1
Year: 2023
Abstract
Chat Generative Pre-trained Transformer (ChatGPT) has demonstrated exceptional ability as an artificial intelligence (AI) language model in producing human-like language and replying to many types of questions, including those pertaining to research. Although ChatGPT has primarily been used for natural language processing and conversation, there is growing interest in its potential uses for scientific writing [1]. One of ChatGPT's main advantages in scientific writing is its capacity to produce high-quality text. Because it has been trained on a vast corpus of data, including scientific literature, ChatGPT can recognize the patterns and structures frequently found in scientific writing. This enables it to produce grammatically sound, coherent, and well-structured prose, which can help researchers save time and effort when writing [2].

However, during one of our research projects, we encountered an important limitation of this AI technology, which we report below. With the aid of ChatGPT, we sought to write the manuscript for our most recent project. Table 1 presents the exact conversation. When we assessed the validity of the statements generated by ChatGPT, all the statements regarding the potential interaction between coronavirus disease 2019 (COVID-19) and lowered brain-derived neurotrophic factor (BDNF) levels [3], and the negative correlation between COVID-19 severity and BDNF, were supported by studies available in the literature [4]. The problem arose when we sought to retrieve the references to which the AI bot attributed its statements. None of the three references provided by the bot actually existed: searching for their metadata in the journals cited turned up nothing. Furthermore, to validate this finding, we asked ChatGPT to provide the URLs for the references it had used.
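The verification step described above, checking whether each generated citation corresponds to a real publication, can be sketched in code. This is a minimal illustration, assuming the Crossref REST API as the bibliographic source; the function names and the stand-in list of known titles are hypothetical, not part of the paper:

```python
import urllib.parse

def crossref_query_url(title: str) -> str:
    # Build a Crossref REST API bibliographic search URL for a citation title.
    # The endpoint is real, but actually resolving results requires a network call.
    return ("https://api.crossref.org/works?query.bibliographic="
            + urllib.parse.quote(title) + "&rows=1")

def flag_unverifiable(candidates, known_titles):
    # Return candidate titles that match no known record; `known_titles`
    # stands in here for a real bibliographic database lookup.
    known = {t.casefold() for t in known_titles}
    return [t for t in candidates if t.casefold() not in known]
```

For example, `flag_unverifiable(generated_refs, titles_found_in_journal)` would return exactly the fabricated entries, mirroring the manual metadata search the authors performed.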
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations