OpenAlex · Updated hourly · Last updated: 30.03.2026, 06:37

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Source framing triggers systematic bias in large language models

2025 · 3 citations · Science Advances · Open Access
Open full text at the publisher

3 citations · 2 authors · Year: 2025

Abstract

Large language models (LLMs) are increasingly used to evaluate text, raising urgent questions about whether their judgments are consistent, unbiased, and robust to framing effects. Here, we examine inter- and intramodel agreement across four state-of-the-art LLMs tasked with evaluating 4800 narrative statements on 24 different topics of social, political, and public health relevance, for a total of 192,000 assessments. We manipulate the disclosed source of each statement to assess how attribution to either another LLM or a human author of specified nationality affects evaluation outcomes. Different LLMs display a remarkably high degree of inter- and intramodel agreement across topics, but this alignment breaks down when source framing is introduced. Attributing statements to Chinese individuals systematically lowers agreement scores across all models and, in particular, for DeepSeek Reasoner. Our findings show that LLMs' judgments of agreement with narrative statements exhibit systematic bias arising from framing effects, with substantial implications for the neutrality and fairness of LLM-mediated information systems.

Topics

Computational and Text Analysis Methods · Artificial Intelligence in Healthcare and Education · Topic Modeling