OpenAlex · Updated hourly · Last updated: 29.04.2026, 10:06

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Anchoring and Framing Effects in Modern LLMs: Comparative Analysis of Cognitive Biases in Language Models

2025 · 0 citations
Open full text at the publisher

Citations: 0 · Authors: 4 · Year: 2025

Abstract

Large Language Models (LLMs) increasingly influence real-world decisions in economic, policy, and social contexts. This paper systematically investigates the presence of two well-known human cognitive biases—anchoring and framing—in three state-of-the-art LLMs: Mistral-7B, LLaMA-3-8B, and DeepSeek-R1. Using a benchmark of 24 crafted prompt pairs designed to test anchoring and framing susceptibility across four key domains (consumer pricing, policy, climate, health), we measure bias using newly defined metrics: Anchoring Sensitivity Index (ASI) and Framing Divergence Score (FDS). Results reveal significant disparities: Mistral-7B demonstrates 42% higher anchoring than LLaMA-3-8B, while DeepSeek-R1 shows the most framing robustness. We propose adversarial fine-tuning and bias-aware prompt engineering as mitigation techniques, achieving a 58% reduction in anchoring bias and 49% in framing bias.
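The abstract names the two metrics but does not give their formal definitions. A minimal sketch of how such prompt-pair metrics could be computed, assuming (hypothetically) that ASI is the mean fraction of the gap between a model's anchor-free estimate and the anchor that the anchored estimate closes, and that FDS is the mean absolute difference between paired responses under the two frames:

```python
# Hypothetical formalizations of the metrics named in the abstract.
# ASI: how far a model's numeric estimate moves toward a supplied anchor,
#      relative to its anchor-free baseline estimate.
# FDS: how much the model's answers diverge between two framings
#      (e.g. "90% survive" vs. "10% die") of the same question.

def anchoring_sensitivity_index(baseline, anchored, anchor):
    """Mean fraction of the baseline-to-anchor gap closed by the anchored estimate."""
    shifts = []
    for b, a, anc in zip(baseline, anchored, anchor):
        gap = anc - b
        # 0.0 = no anchoring, 1.0 = answer moved fully to the anchor
        shifts.append(0.0 if gap == 0 else (a - b) / gap)
    return sum(shifts) / len(shifts)

def framing_divergence_score(frame_a, frame_b):
    """Mean absolute difference between paired responses under the two frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

# Illustrative data: three prompt pairs with answers normalized to [0, 1]
asi = anchoring_sensitivity_index(baseline=[0.50, 0.40, 0.60],
                                  anchored=[0.70, 0.40, 0.90],
                                  anchor=[1.0, 1.0, 1.0])
fds = framing_divergence_score([0.8, 0.6, 0.7], [0.3, 0.6, 0.5])
```

Both definitions here are assumptions for illustration, not the paper's published formulas; the actual metrics may normalize or aggregate differently across the 24 prompt pairs and four domains.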


Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Computational and Text Analysis Methods