OpenAlex · Updated hourly · Last updated: 15.03.2026, 20:49

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Prosocial When Simple and Cold-Hearted When Complex: How Task Difficulty Shapes LLM Behavior

2025 · 0 citations · Decision Analysis
Open full text at the publisher

0

Citations

3

Authors

2025

Year

Abstract

Prior studies suggest that large language models (LLMs) act prosocially in simplified game-theoretic settings, but whether such behavior reflects stable objectives or context-driven patterns is unclear. We test whether LLMs exhibit fairness when choices follow complex tasks or take place in more complex decision environments. We hypothesize that problem complexity and mathematical prompts increase the LLM's weight on self-interest by activating responses geared toward calculation and rationality. We operationalize our theory using a quantal response framework and conduct a series of experiments with GPT-4, GPT-4o, and o3-mini as decision makers to test our hypotheses. In Study 1, models played Dictator and Ultimatum games after completing a series of unrelated problems that varied in context and difficulty. Study 2 was a sequential supply chain game that mirrors key aspects of the Ultimatum game regarding fairness concerns, but with added complexity. In Study 1, simple prompts produced nearly equal splits, reflecting fairness norms and a preference for equity, whereas complex math prompts invoked rational profit-maximization logic that reduced allocation offers. In the pricing game, the models prioritized self-interested pricing but differed in decision execution: GPT-4 and GPT-4o selected lower prices because of random errors and heuristic responses rather than fairness concerns, whereas o3-mini consistently derived the profit-maximizing solution. Fairness in LLM responses is thus context sensitive and often suppressed by task characteristics that trigger goal-directed responses, so researchers and developers must assess social preferences in more complex scenarios. Moreover, our research shows that utility-based models that incorporate bounded rationality and fairness capture core patterns in LLM behavior and yield testable predictions, supported by both choice data and model-generated text.
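The quantal response framework referenced in the abstract can be sketched as a logit choice rule, P(a) ∝ exp(λ·u(a)), where λ captures bounded rationality. The snippet below is a minimal illustration, not the paper's calibration: the Fehr-Schmidt-style inequity-aversion utility, the pie size, and the parameter values (`alpha`, `lam`) are assumptions chosen for demonstration.

```python
import math

def quantal_response_probs(utilities, lam):
    """Logit quantal response: P(a) proportional to exp(lam * u(a)).
    lam -> 0 yields uniform random choice; lam -> infinity approaches
    the deterministic utility-maximizing choice."""
    m = max(lam * u for u in utilities)  # subtract max for numerical stability
    weights = [math.exp(lam * u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def dictator_utility(keep, pie, alpha):
    """Illustrative inequity-averse utility (Fehr-Schmidt style):
    own payoff minus a penalty alpha on advantageous inequality."""
    other = pie - keep
    return keep - alpha * max(keep - other, 0)

pie = 10
keeps = list(range(pie + 1))  # amount the dictator keeps for itself

# Two hypothetical decision makers: purely selfish vs. inequity averse.
u_selfish = [dictator_utility(k, pie, alpha=0.0) for k in keeps]
u_fair = [dictator_utility(k, pie, alpha=0.9) for k in keeps]

p_selfish = quantal_response_probs(u_selfish, lam=2.0)
p_fair = quantal_response_probs(u_fair, lam=2.0)

# With alpha=0 the modal choice is to keep everything (10);
# with strong inequity aversion the modal choice is the equal split (5).
print(max(keeps, key=lambda k: p_selfish[k]))  # 10
print(max(keeps, key=lambda k: p_fair[k]))     # 5
```

A higher λ concentrates probability mass on the utility-maximizing offer, while a lower λ spreads choices stochastically, which is how such models can separate deliberate self-interest from random or heuristic errors in observed allocations.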
History: This paper has been accepted for the Decision Analysis Special Issue on the Implications of Advances in Artificial Intelligence for Decision Analysis. Funding: The authors acknowledge the financial support of UNSW Business School and the National Natural Science Foundation of China [Grant 72403226]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/deca.2025.0396.

Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education