This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Role of Chat Generative Pretrained Transformer and Artificial Intelligence in Scientific Manuscript Writing
0 Citations · 4 Authors · Year: 2023
Abstract
Chat generative pretrained transformer (ChatGPT) is an artificial intelligence (AI) chatbot developed by OpenAI.[1] It was launched in November 2022. It is trained using reinforcement learning from human feedback, which combines supervised learning and reinforcement learning; both approaches rely on human trainers to improve performance. As the name implies, the main function of ChatGPT is to mimic human conversation. Since its release, however, its uses have been explored in more detail, and it has proved far more versatile than initially thought. It can write and debug computer programs, compose music and student essays, answer test questions, and write poetry and songs. Its software also attempts to reduce disingenuous responses. Since its development, the role of ChatGPT in scientific manuscript writing has been actively explored. One report describes ChatGPT writing an entire manuscript from inputs such as headings and subheadings supplied by a human author.[2] Some studies have even listed ChatGPT as an author.[3] It is reported to help with paraphrasing difficult sentences, identifying spelling mistakes, correcting grammatical errors, and generating draft outlines and abstracts from the author's full text. Ethically, however, creating a whole article with ChatGPT is a matter of debate, as it can constitute serious scientific misconduct. The very purpose of scientific writing is called into question when such "artificial authors" are involved. The ethical use of these AI models is therefore a serious concern and a burning topic of discussion.
Countermeasures to detect AI-generated text, including digital watermarking, are being rapidly developed to identify this "AI plagiarism." Authors, scientists, and researchers worldwide should apply the five principles of ethical intelligence - non-maleficence, make things better, respect others, be fair, and care[4] - when using AI-assisted chatbots. The saying "garbage in, garbage out" is worth remembering every time we use AI: it is only as good as what goes into it. AI-generated outputs are not flawless; they can be inconsistent and can exhibit language bias. The output must therefore be critically reviewed for scientific accuracy, relevance, and, most importantly, plagiarism. Moreover, the use of these AI models should be disclosed in the manuscript for transparency. In an era of pervasive technology and misinformation, AI models must be used responsibly, with proper reporting, until clearer guidelines for their ethical use are issued by competent authorities globally. This is vital to promote and protect the integrity and reliability of medical research and medical knowledge. Used judiciously and ethically, these models can be excellent tools for patient education, medical teaching, and research.

Financial support and sponsorship: Nil.

Conflicts of interest: There are no conflicts of interest.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations