This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Nonlinear transformation of probabilities by large language models
Citations: 0
Authors: 4
Year: 2025
Abstract
Large Language Models (LLMs) such as ChatGPT and Claude demonstrate impressive abilities to generate meaningful text and mimic human-like responses. While they can undoubtedly boost human performance, there is also the risk that uninstructed users rely on them for direct advice without critical distance. A case in point is advice on economic choice. Choice tasks often involve probabilistic outcomes. In these tasks, human choice has been shown to diverge systematically from rational, i.e., linear, weighting of probabilities: it reveals an inverse S-shaped weighting pattern in description-based choice (overweighting of small probabilities and underweighting of large ones) and an S-shaped weighting pattern in experience-based choice. We investigate how LLMs' choices transform probabilities in simple economic tasks involving a sure outcome and a simple lottery with two probabilistic outcomes. LLMs' choices most often do not yield an inverse S-shaped probability weighting pattern; instead, they display distinct nonlinearities in probability weighting. Some models exhibited risk-seeking behavior, others a strong recency bias, and the more accurate models underweighted small and overweighted large probabilities, resembling the weighting patterns of decisions from experience rather than from description. These findings raise concerns about the quality of the advice users would receive from LLMs on economic choice and highlight the need to use LLMs critically in decision-making contexts.
Highlights
• Demonstrates that LLMs do not replicate human probability weighting
• Additional alignment training of transformer models can introduce new nonlinear biases
• Raises concerns about LLMs used as unmonitored decision-making advisors
• LLMs deviate from both normative theory and typical human behavior under risk
• Underscores the need for rigorous evaluation of LLM-derived decision advice
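The abstract does not state which weighting function the authors fit, but the inverse S-shape it describes for description-based human choice is commonly illustrated with the one-parameter Tversky-Kahneman (1992) form w(p) = p^γ / (p^γ + (1−p)^γ)^(1/γ). The sketch below uses γ = 0.61, the median estimate reported by Tversky and Kahneman for gains; both the function choice and the parameter value are illustrative assumptions, not the paper's method.

```python
import math


def tk_weight(p: float, gamma: float = 0.61) -> float:
    """One-parameter Tversky-Kahneman (1992) probability weighting function.

    For gamma < 1 this produces the inverse S-shape described in the
    abstract: small probabilities are overweighted, large ones
    underweighted. gamma = 0.61 is the TK median estimate for gains
    (an illustrative assumption, not taken from this paper).
    """
    if p <= 0.0 or p >= 1.0:
        return max(0.0, min(1.0, p))  # w(0) = 0, w(1) = 1 by convention
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)


# Inverse S-shape: a small probability is weighted up,
# a large probability is weighted down.
print(round(tk_weight(0.05), 3))  # → 0.132 (> 0.05, overweighted)
print(round(tk_weight(0.95), 3))  # → 0.793 (< 0.95, underweighted)
```

The S-shaped pattern the paper attributes to experience-based choice (and to the more accurate LLMs) is the mirror image: γ > 1 in the same functional form yields underweighting of small and overweighting of large probabilities.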
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations