OpenAlex · Updated hourly · Last updated: 22.03.2026, 04:07

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Nonlinear transformation of probabilities by large language models

2025 · 0 citations · Computers in Human Behavior: Artificial Humans · Open Access
Open full text at the publisher

Citations: 0 · Authors: 4 · Year: 2025

Abstract

Large Language Models (LLMs) such as ChatGPT and Claude demonstrate impressive abilities to generate meaningful text and mimic human-like responses. While they can undoubtedly boost human performance, there is also a risk that uninstructed users rely on them for direct advice without critical distance. A case in point is advice on economic choice. Choice tasks often involve probabilistic outcomes, and in such tasks human choice has been shown to diverge systematically from rational, that is, linear, weighting of probabilities: it reveals an inverse S-shaped weighting pattern in description-based choice (i.e., overweighting of small probabilities and underweighting of large ones) and an S-shaped weighting pattern in experience-based choice. We investigate how LLMs’ choices transform probabilities in simple economic tasks involving a sure outcome and a simple lottery with two probabilistic outcomes. LLMs’ choices most often do not yield an inverse S-shaped probability weighting pattern; instead, they display distinct nonlinearities in their treatment of probabilities. Some models exhibited risk-seeking behavior, others a strong recency bias, and the more accurate models underweighted small and overweighted large probabilities, resembling the weighting patterns of decisions from experience rather than from description. These findings raise concerns about the quality of the advice users would receive from LLMs on economic choice and highlight the need to use LLMs critically in decision-making contexts.

Highlights

• Demonstrates that LLMs do not replicate human probability weighting
• Additional alignment training of transformer models can introduce new nonlinear biases
• Raises concerns about LLMs used as unmonitored decision-making advisors
• LLMs deviate from both normative theory and typical human behavior under risk
• Underscores the need for rigorous evaluation of LLM-derived decision advice
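
The inverse S-shaped and S-shaped patterns described above are commonly modeled with the one-parameter Tversky-Kahneman (1992) probability weighting function w(p) = p^γ / (p^γ + (1 − p)^γ)^(1/γ). Below is a minimal sketch assuming that standard form; it is not the paper's own estimation procedure, and the γ values are illustrative only.

```python
# Minimal sketch (not from the paper) of the one-parameter
# Tversky-Kahneman (1992) probability weighting function:
#   w(p) = p**g / (p**g + (1 - p)**g) ** (1 / g)
# g < 1 gives the inverse S-shape of description-based human choice
# (small p overweighted, large p underweighted); g > 1 gives the
# S-shape the abstract associates with experience-based choice.

def tk_weight(p: float, g: float) -> float:
    """Decision weight w(p) under the Tversky-Kahneman form."""
    num = p ** g
    return num / (num + (1.0 - p) ** g) ** (1.0 / g)

# Compare the two regimes across small, medium, and large probabilities.
# g = 0.61 is the original Tversky-Kahneman estimate for gains;
# g = 1.50 is an arbitrary illustrative value for the S-shaped regime.
for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p={p:.2f}  inverse-S (g=0.61): {tk_weight(p, 0.61):.3f}  "
          f"S-shaped (g=1.50): {tk_weight(p, 1.50):.3f}")
```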

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Forecasting Techniques and Applications