OpenAlex · Updated hourly · Last updated: 27.03.2026, 07:01

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

“Always Check Important Information!” - The Role of Disclaimers in the Perception of AI-generated Content

2025 · 2 citations · Open Access
Open full text at the publisher

Citations: 2
Authors: 2
Year: 2025

Abstract

Generative AI (genAI), and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their accessibility and rise as an information source, these models often struggle with factual accuracy. In three experimental studies, we therefore explored how disclaimers affect people’s perceptions of text and authorship in scientific information generated by AI. Additionally, we investigated the impact of information presentation and authorship attributions—whether content is authored solely by AI or co-authored with humans. Across the experiments, no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions were found. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI’s strengths vs. limitations did not. In addition, participants attributed higher machine heuristic values to AI than to human authors. Study 2 revealed interaction effects between authorship attribution and disclaimer type, providing early insights into possible balancing effects of human-AI co-authorship. No difference between providing no vs. a basic disclaimer was found in Study 3. However, both strengths and limitations disclaimers induced higher credibility ratings. This research suggests that disclaimers alone do not affect the perception of AI-generated output. Greater efforts are needed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.

Related works

Authors

Topics

Artificial Intelligence in Healthcare and Education
Ethics and Social Impacts of AI
Misinformation and Its Impacts