OpenAlex · Updated hourly · Last updated: 22.04.2026, 10:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Con: Artificial Intelligence in Manuscript Writing: Pitfalls and Ethical Concerns the Authors Should Be Aware

2025 · 2 citations · 1 author · Annals of Cardiac Anaesthesia · Open Access

Abstract

The use of artificial intelligence (AI) in scientific writing is rapidly evolving. AI-based manuscript writing tools have significant pitfalls that authors must consider when preparing manuscripts for journal publication. Awareness of these potential challenges is crucial to ensuring AI’s practical and responsible use in scientific writing.

Risk of Plagiarism and Lack of Originality: One significant pitfall of AI-generated text is its potential to infringe upon existing work without proper attribution. Although AI tools generate “new” text based on patterns, they can unintentionally plagiarize or produce text that closely resembles other published material, raising concerns over intellectual property rights. An ethical challenge arises because AI-generated content, if not carefully vetted, may lead to unintentional plagiarism: the models are trained on existing literature, some of which may be proprietary or copyrighted.[1] Many journals and publishers have implemented checking mechanisms that report the percentage of text likely to be plagiarized or AI-generated. While these tools can identify potential plagiarism and AI involvement in text creation, false positives can occur. Prospective authors can address flagged passages by paraphrasing them or by citing the sources of the statements under scrutiny.

Over-Reliance on AI and Erosion of Writing Skills: This could be considered a significant disadvantage of using AI for manuscript writing. Relying too much on AI tools can weaken essential writing and critical thinking skills. Scientific writing involves synthesizing ideas, analyzing data, and presenting arguments coherently; researchers’ skills may suffer if they rely heavily on AI tools to produce content. The simplicity and convenience of generating text through AI might discourage researchers from fully engaging in writing, which is crucial for developing and honing their analytical and writing abilities.
Overdependence on AI could also lead to a decline in the quality of scientific discussions, as the nuanced and critical thinking required for high-quality scientific work may be undermined. The erosion of these skills affects individual researchers and has broader implications for the scientific community, where rigorous debate and critical analysis are fundamental.[2] Studies have highlighted the importance of balancing the benefits of AI with the need to maintain research integrity and the skills required for scientific writing.[3,4] Efforts are being made to create policies and regulations that reduce the risks AI tools pose to writing skills and research integrity.[4,5]

Bias in AI Outputs: AI models can replicate and magnify biases found in the data they are trained on. When AI tools are employed to create literature reviews or condense research results, they may unintentionally perpetuate biases, particularly if the training data includes outdated or skewed viewpoints. AI models can also inadvertently favor specific research topics or portray a biased view of the literature based on the prevalence of particular topics in their training dataset.[6]

Unreliable or Inaccurate Data Sources: Authors do not control the data sources used by AI platforms. AI may rely on poorly substantiated or weakly evidenced data, so accepting AI-provided information without independent verification by the authors is a significant shortcoming.

Legal Implications of Statements Based on AI Tools: AI tools may produce content that leads to legal issues. AI-generated statements might unintentionally infringe on intellectual property rights or misrepresent facts, which can result in legal action against the authors, publishers, or journals. It is therefore essential for authors to meticulously review and verify AI-generated content to ensure it adheres to legal standards and ethical guidelines.
Absence of Innovative Concepts: Relying too heavily on AI tools for manuscript writing may restrict the inclusion of creative and forward-looking ideas, because AI tools generate content based on existing data and trends. Authors should balance their use of AI tools with their own critical thinking and creativity to produce research that expands the boundaries of current knowledge and explores future possibilities.

Ethical Dilemmas in Authorship: Using AI in writing raises questions about authorship. If an AI system partially or wholly generates a manuscript, how should this be reflected in the authorship and contribution sections? Current ethical guidelines, such as those provided by the International Committee of Medical Journal Editors (ICMJE), are unclear on the extent to which AI-generated content warrants authorship credit. AI challenges traditional definitions of authorship, which typically require intellectual contribution, and given the increasing influence of AI-generated content, there is an ongoing debate over whether AI should be acknowledged in authorship or contributorship.[7]

The following is a demonstration paragraph generated by AI on the topic of “Anesthesia for Coronary Bypass Surgery,” included to highlight the pitfalls of AI tools in manuscript writing: Anesthesia for coronary bypass surgery is a critical component of patient management, with various techniques utilized to ensure safety and comfort. According to Smith et al. (2021), the use of propofol and fentanyl has become standard for inducing and maintaining anesthesia in most coronary surgeries. Recent studies also suggest that regional anesthesia, combined with general anesthesia, can reduce postoperative pain and improve recovery times (Jones and Williams, 2020). Furthermore, choosing anesthetic agents is pivotal in managing hemodynamic stability during surgery (Taylor, 2022).
It is essential to monitor the patient’s blood pressure, heart rate, and oxygen saturation continuously to avoid complications such as myocardial ischemia or arrhythmias (Smith et al., 2021).

This paragraph illustrates how plagiarism and unreliable, nonexistent citations can lead to misleading and unethical authorship. The statement, “It is essential to monitor the patient’s blood pressure, heart rate, and oxygen saturation continuously to avoid complications such as myocardial ischemia or arrhythmias,” was taken directly from a hypothetical source without citation. The fabricated citations (Smith et al., 2021; Jones and Williams, 2020; Taylor, 2022) are presented as if they were legitimate, although they correspond to no actual publications. Authors should therefore exercise due diligence when using information provided by AI tools.

CONCLUSION

AI assistance in manuscript writing presents substantial pitfalls and ethical concerns that authors must consider. The risk of plagiarism and lack of originality, over-reliance on AI leading to the erosion of writing skills, inherent biases in AI outputs, and ethical dilemmas in authorship attribution are critical issues that require careful consideration. Authors must stay alert, using AI to support their writing while maintaining the essential elements of careful scientific discourse and performing due diligence. The author indicates that Grammarly was employed to prevent grammatical errors.

Financial support and sponsorship: Nil.

Conflicts of interest: There are no conflicts of interest.

Topics

Artificial Intelligence in Healthcare and Education