OpenAlex · Updated hourly · Last updated: 30.03.2026, 05:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Prompt to Press: Evaluating Human Perception of AI Involvement in News Writing Across Prompt Specificity

2026 · 0 citations · Open Access

Citations: 0 · Authors: 6 · Year: 2026

Abstract

Large language models (LLMs) are becoming a common feature in content creation tools, prompting important questions about how design choices influence user trust and engagement in AI-assisted journalism. Beyond output quality, factors such as prompt specificity, model choice, and authorship disclosure are themselves interaction design parameters that influence how users interpret and evaluate AI contributions. Yet, little is known about how these design decisions affect reader perceptions in journalistic contexts. To address this gap, we conducted an experiment with 150 participants who evaluated news articles on the sensitive topic of assisted suicide. The articles systematically varied in authorship (human-written, AI-edited, or AI-generated), stance (pro- or anti-legalization), and prompt specificity (vague, moderate, or highly detailed). Participants rated each article on engagement, subjectivity, and perceived AI involvement, and also provided open-ended justifications for their authorship judgments. Our findings show that prompt specificity and model choice significantly influence perceptions of authorship, underscoring how technical design decisions in AI tools can shape public trust in journalism.
