This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Who moved my text? Assessing whether ChatGPT can write better abstracts than humans
Citations: 0
Authors: 2
Year: 2026
Abstract
Purpose: This study investigates, from the readers' perspective, whether ChatGPT can generate research abstracts that are perceived as more readable, comprehensive, trustworthy, and overall better than human-written versions.
Design/methodology/approach: Through the lens of Cognitive Load and Processing Fluency theories, the study explores how generative artificial intelligence (GenAI) tools may influence readers' assessment of the textual characteristics of accounting and auditing scientific texts. Using a quasi-experimental method, it gathered fifteen qualitative papers from a leading accounting journal, whose original abstracts were rewritten by ChatGPT 4.0 based on a standardised prompt. Fifteen experienced researchers from Portugal and Brazil evaluated the two versions through a blinded process. The assessment also included content analysis in NVivo and readability indices.
Findings: While ChatGPT-generated abstracts are generally preferred, particularly for readability and comprehensiveness, they do not consistently outperform human-written versions, as the perceived trustworthiness of the message appears to underpin respondents' judgment.
Research limitations/implications: The findings are constrained by a small sample drawn from only two countries, a limited journal scope, and a single GenAI tool, which limit their generalisability. In addition, respondents' lack of familiarity with the sources and the inclusion of expert researchers only may have biased the perception of trustworthiness.
Practical implications: The findings have implications for future uses of GenAI in research dissemination and academic publishing.
Originality/value: Despite its constraints, the study offers novel insights into the potential applications of GenAI in academia, emphasising that content richness and precision should not be compromised by oversimplification, particularly in technical and scientific disciplines.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations