OpenAlex · Updated hourly · Last updated: 15.03.2026, 12:34

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Rise of the robo-authors: Are chatbots threatening scientific integrity?

2023 · 0 citations · Annals of Indian Psychiatry · Open Access
Open full text at publisher

0 citations · 3 authors · Year: 2023

Abstract

Sir, The advent of artificial intelligence (AI) chatbots marks an exciting development in recent technology, with numerous chatbots emerging, such as the widely recognized and popular ChatGPT. As a computer program rooted in large language models (LLMs),[1] ChatGPT can interact with humans by answering questions, drafting scientific manuscripts, letters, and computer code, and solving problems in fields such as mathematics and physics. Moreover, AI may soon be capable of undertaking more intricate tasks, such as designing experiments or conducting peer reviews.[2]

One of the most significant benefits of these tools is their ability to reduce the time and effort required to compose and assess scientific papers. By outsourcing certain processes, such as text formatting and organization, they allow researchers to focus on more important aspects of their work. Even so, reviewers and editors must remain vigilant when using these tools to ensure the empirical integrity of the articles they evaluate. ChatGPT should not replace human discernment, and experts need to assess its outputs before integrating them into crucial decision-making processes or applications. In addition, numerous ethical issues stem from employing such tools, including the risks of plagiarism and inaccuracy, as well as potential disparities in availability between affluent and less-developed nations, particularly if the software adopts a paid model. Authors should use these tools for scholarly work only once the tools have undergone thorough testing and proven to be highly accurate. Furthermore, their use should be limited to specific tasks that preserve the integrity and originality of the authors' work, and human experts should consistently oversee their output to ensure its reliability and validity.
Accountability for content produced by ChatGPT in scientific papers poses a challenge. Accompanying this issue are ethical dilemmas, medicolegal and copyright disputes, a lack of creative thinking and reasoning, methodological biases, and content inaccuracies.[3] Notably, there is currently no established governing body or defined set of rules and boundaries regarding the extent to which AI may be used in scientific writing. The capabilities of ChatGPT underscore the mounting need for comprehensive AI author guidelines in academic publishing. Ethical concerns abound when AI generates academic text, touching on copyright, attribution, plagiarism, and authorship. Chatbots cannot be considered authors, as they fail to meet the criteria for authorship: they are unable to comprehend the role of authors or assume responsibility for a paper. They also fail to meet the ICMJE authorship requirements,[4] specifically approval of the final version for publication and acceptance of responsibility for every aspect of the work. Furthermore, ChatGPT does not always provide up-to-date reference articles and occasionally supplies inaccurate references. It is therefore essential to carefully consider the role of AI chatbots such as ChatGPT in scientific research and to ensure that their use does not undermine the integrity of the work or the ethical standards of scholarly publishing.

The rise of LLM technology, including ChatGPT in health care, necessitates urgent guidelines and regulations to address potential misuse. Engaging stakeholders, considering ethical and legal issues, and encouraging a science-driven debate will help ensure responsible use. Proper implementation may accelerate innovation, promote equity, and overcome language barriers while mitigating the risks of misleading or fraudulent outcomes.

Financial support and sponsorship: Nil.

Conflicts of interest: There are no conflicts of interest.


Topics

Artificial Intelligence in Healthcare and Education · COVID-19 diagnosis using AI · AI in Service Interactions