This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Can ChatGPT do the same? ChatGPT and professional editors compared
Citations: 0 · Authors: 4 · Year: 2026
Abstract
Since the launch of ChatGPT, the use of and debate around generative AI has grown rapidly. Professionals whose work depends on writing have expressed concern about the potential impact of such tools on their roles. But are these concerns justified? Can ChatGPT truly take on the responsibilities of a professional writer? This study investigates that question by comparing the performance of ChatGPT with that of professional editors tasked with optimizing business communication. We conducted two studies, using both qualitative and quantitative methods. In the first, three experienced editors were asked to rewrite four business letters. Their editing processes were recorded using the Microsoft Snipping Tool, and immediately afterward, we conducted retrospective interviews using stimulated recall. These interviews were transcribed and analyzed. Insights from the observations and interviews informed the design of the prompt instructions used in the second study. In the second study, we asked ChatGPT to revise the same four letters using three different prompt types. The Simple prompt instructed the model to “make this text reader-focused.” The B1 prompt referred explicitly to the CEFR B1 language level, requiring ChatGPT to tailor the text for intermediate readers. Finally, the Process prompt simulated the editing steps observed in the professional editors’ workflows. To evaluate outcomes, we conducted both a qualitative comparison of the revised texts and a quantitative readability analysis using LiNT, a validated tool developed for Dutch texts. Our results show that the human editors substantially improved the readability of the original letters, reducing the use of unfamiliar words, shortening complex sentences, and increasing personal engagement through pronoun use. Among the AI outputs, ChatGPT B1 achieved results most comparable to the editors, both in readability and accuracy. 
In contrast, ChatGPT Simple fell short in terms of clarity and introduced errors through faulty inferences. Surprisingly, ChatGPT Process also underperformed compared to ChatGPT B1 and the human editors. Only the editors' and ChatGPT B1 versions were free from errors. In the discussion, we reflect on how generative AI is reshaping the concept of writing within organizations, the skills required to produce effective written communication, and the impact on writing pedagogy. Rather than replacing human editors, we argue that generative AI can play a valuable role as a collaborative tool in the organizational writing process.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,291 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,535 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 cit.