This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Emergent Behavioural Signatures in Large Language Models: A Cross-Task Study of Risk and Forecasting Behaviour
Citations: 0
Authors: 4
Year: 2025
Abstract
Recent advancements in large language models (LLMs) such as GPT-4, LLaMA, and Qwen2.5 have revealed capabilities extending beyond language generation to include complex reasoning and decision-making. This paper investigates whether LLMs exhibit consistent behavioural tendencies—comparable to human personality traits—when placed in structured decision-making scenarios. We conduct a two-pronged empirical study using (i) the Balloon Analogue Risk Task (BART), a psychological tool for assessing risk propensity, and (ii) a time-series forecasting task involving real-world FMCG sales data. Across both tasks, four state-of-the-art LLMs demonstrated stable and distinct behavioural profiles: models that acted conservatively in BART also generated cautious sales forecasts, while risk-taking models projected more aggressive outcomes. These patterns persisted across multiple runs and prompt variations, providing strong evidence that the observed behaviours are not artifacts of prompt engineering but rather emergent dispositions rooted in model architecture and training data. This work establishes a foundation for behavioural modelling in AI, with implications for building task-aligned foundation models that reflect consistent decision-making styles.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations