This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Anchoring and Framing Effects in Modern LLMs: Comparative Analysis of Cognitive Biases in Language Models
Citations: 0
Authors: 4
Year: 2025
Abstract
Large Language Models (LLMs) increasingly influence real-world decisions in economic, policy, and social contexts. This paper systematically investigates the presence of two well-known human cognitive biases—anchoring and framing—in three state-of-the-art LLMs: Mistral-7B, LLaMA-3-8B, and DeepSeek-R1. Using a benchmark of 24 crafted prompt pairs designed to test anchoring and framing susceptibility across four key domains (consumer pricing, policy, climate, health), we measure bias using newly defined metrics: Anchoring Sensitivity Index (ASI) and Framing Divergence Score (FDS). Results reveal significant disparities: Mistral-7B demonstrates 42% higher anchoring than LLaMA-3-8B, while DeepSeek-R1 shows the most framing robustness. We propose adversarial fine-tuning and bias-aware prompt engineering as mitigation techniques, achieving a 58% reduction in anchoring bias and 49% in framing bias.
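The abstract names two metrics, the Anchoring Sensitivity Index (ASI) and the Framing Divergence Score (FDS), but this overview page does not define them. The sketch below is an illustration only: one plausible way such metrics could be computed from paired prompt responses. The function names, formulas, and numbers are assumptions for this sketch, not the paper's actual definitions.

```python
# Illustrative sketch only: ASI and FDS are not defined on this page.
# These formulas are assumptions, not the paper's metrics.
from statistics import mean

def anchoring_sensitivity_index(anchored, unanchored, anchors):
    """Hypothetical ASI: mean fraction of the anchor-to-baseline gap that
    the model's anchored estimate moves toward the anchor
    (0 = no pull, 1 = fully anchored)."""
    shifts = []
    for a_resp, u_resp, anchor in zip(anchored, unanchored, anchors):
        gap = anchor - u_resp
        if gap == 0:
            continue  # anchor coincides with baseline; no pull measurable
        shifts.append((a_resp - u_resp) / gap)
    return mean(shifts)

def framing_divergence_score(gain_frame, loss_frame):
    """Hypothetical FDS: mean absolute difference between responses to
    logically equivalent gain- and loss-framed prompts."""
    return mean(abs(g - l) for g, l in zip(gain_frame, loss_frame))

# Example with made-up numbers for three consumer-pricing prompt pairs
print(anchoring_sensitivity_index(
    anchored=[80.0, 95.0, 60.0],
    unanchored=[50.0, 70.0, 55.0],
    anchors=[100.0, 120.0, 90.0],
))  # ~0.41: anchored estimates move ~41% of the way toward the anchor
print(framing_divergence_score([0.7, 0.6, 0.8], [0.4, 0.5, 0.3]))  # 0.3
```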
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,786 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,331 citations
"Why Should I Trust You?"
2016 · 14.602 Zit.
Generative adversarial networks
2020 · 13,213 citations