OpenAlex · Updated hourly · Last updated: 16.03.2026, 05:04

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Comparing human and artificial intelligence in writing for health journals: an exploratory study

2023 · 18 citations · Open Access
Open full text at publisher

Citations: 18 · Authors: 5 · Year: 2023

Abstract

Aim and objectives: The aim was to contribute to the editorial principles on the possible use of Artificial Intelligence (AI)-based tools for scientific writing. The objectives were to: (1) enlist the inclusion and exclusion criteria to test ChatGPT use in scientific writing; (2) develop evaluation criteria to assess the quality of articles written by human authors and ChatGPT; and (3) compare prospectively written manuscripts by human authors and ChatGPT.

Design: Prospective exploratory study.

Intervention: Human authors and ChatGPT were asked to write short journal articles on three topics: (1) promotion of early childhood development in Pakistan; (2) interventions to improve gender-responsive health services in low- and middle-income countries; and (3) the pitfalls in risk communication for COVID-19. We content-analyzed the articles using an evaluation matrix.

Outcome measures: The completeness, credibility, and scientific content of an article. Completeness meant that structure (IMRaD) and organization were maintained. Credibility required that others' work is duly cited, with an accurate bibliography. Scientific content required specificity, data accuracy, cohesion, inclusivity, confidentiality, limitations, readability, and time efficiency.

Results: The articles by human authors scored better than those by ChatGPT in completeness and credibility. Similarly, human-written articles scored better on most items of scientific content, except for time efficiency, where ChatGPT scored better. The methods section was absent in the ChatGPT articles, and a majority of references in its bibliography were unverifiable.

Conclusions: ChatGPT generates content that is believable but may not be true. The creators of this powerful model must step up and provide solutions to manage its glitches and potential misuse. In parallel, academic departments, editors, and publishers must expect growing utilization of ChatGPT and similar tools. Disallowing ChatGPT as a co-author may not be enough on their part. They must adapt editorial policies, use measures to detect AI-based writing, and stop its likely implications for human health and life.

Strengths and limitations:
- First study that examines the scientific writing of ChatGPT by comparing it with human-written articles.
- Explains how ChatGPT generates believable content that may not be true.
- Indicates that the creators of ChatGPT must step up to address its misuse and potential hazards.
- An initial exploration, based on limited data; larger studies are needed for generalizable conclusions.

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Meta-analysis and systematic reviews