This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Decoding moral responses in AI: A quantitative analysis of large language models

2025 · 1 citation · 6 authors · Computers in Human Behavior Reports · Open Access

Abstract

Despite the proliferation of powerful large language models (LLMs), there remains a need for systematic, quantitative comparisons of their responses to moral dilemmas. While LLMs lack intrinsic capacities for moral evaluation, they can generate texts indistinguishable from human responses—a feature that carries serious moral consequences, particularly in advice-giving contexts. This study builds on advances in AI ethics by systematically comparing the moral response patterns of seven LLMs: GPT-3, GPT-3.5, GPT-4, GPT-4.1, Claude 3.7 Sonnet, Grok 3, and Gemini 2.5 Pro. Each LLM was presented with a series of moral dilemmas, both personal and impersonal, under three conditions: no rule, a deontological preamble, and a utilitarian preamble. Human and AI-assisted coding were employed to categorize the models’ responses into distinct moral judgments. Logistic regression analyses revealed that LLMs produced patterns consistent with established human biases in moral dilemmas, tending toward more utilitarian moral judgments in impersonal (vs. personal) dilemmas. Despite similar utilitarian moral tendencies under “no rule” and “utilitarian” conditions in most LLMs, the models’ outputs varied significantly under deontological framing, except for GPT-4 and Claude 3.7 Sonnet in personal dilemmas. These findings highlight the learning prowess, or “slow thinking,” of LLMs and thus potential ways AI models diverge from human response patterns in morally charged scenarios. Our findings and approach also advocate for the nascent field of “Artificial Intelligence Psychology,” a discipline poised to leverage psychological paradigms for a deeper understanding of AI’s outputs and limitations. This insight supports the responsible advancement and application of AI in society.

Highlights

• LLMs differentiate personal and impersonal moral dilemmas like humans: more utilitarian choices in non-emotional decisions
• Across seven LLMs, GPT-4 and Claude 3.7 Sonnet remain deontological under utilitarian prompts
• A default inclination towards utilitarianism hints that most AI systems bear inherent ethical systems without explicit guidance
• Few-shot learning is linked to slow thinking; fast thinking is a blind spot in AI when emulating human morality and emotions
• AI Psychology as an emerging field: psychological methods and frameworks are essential to AI and AGI research and development
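To make the study design described above concrete, here is a minimal, self-contained sketch of the prompting conditions and the logistic regression analysis named in the abstract. The dilemma wordings, preamble texts, binary coding scheme, and response probabilities below are illustrative assumptions, not the authors' actual materials; responses are simulated so the script runs without any LLM API access, and a real replication would replace the simulation with human- or AI-coded model outputs.

```python
# Sketch of the experimental design: personal vs. impersonal dilemmas,
# three prompt conditions, binary moral-judgment coding, and a logistic
# regression of judgment on dilemma type and condition.
import random

import pandas as pd
import statsmodels.formula.api as smf

random.seed(42)

# Hypothetical preambles for the three conditions (not the paper's wording).
CONDITIONS = {
    "no_rule": "",
    "deontological": "Rule: never use a person merely as a means, whatever the outcome.\n",
    "utilitarian": "Rule: always choose the action that saves the most lives.\n",
}

# Hypothetical trolley-style dilemmas standing in for the paper's materials.
DILEMMAS = {
    "impersonal": ("A runaway trolley will kill five people. You can pull a "
                   "lever to divert it onto a track where it will kill one "
                   "person. Do you pull the lever?"),
    "personal": ("A runaway trolley will kill five people. You can push a "
                 "large man off a footbridge onto the track, killing him but "
                 "stopping the trolley. Do you push him?"),
}

rows = []
for dilemma_type, dilemma in DILEMMAS.items():
    for condition, preamble in CONDITIONS.items():
        prompt = preamble + dilemma  # what would be sent to each LLM
        for _ in range(50):
            # Placeholder for a coded LLM answer: 1 = utilitarian choice
            # (sacrifice one to save five), 0 = deontological refusal.
            # Simulated with the human-like bias the abstract reports:
            # more utilitarian judgments in impersonal dilemmas, fewer
            # under a deontological preamble.
            p = 0.8 if dilemma_type == "impersonal" else 0.4
            if condition == "deontological":
                p -= 0.2
            rows.append({
                "dilemma_type": dilemma_type,
                "condition": condition,
                "utilitarian": int(random.random() < p),
            })

df = pd.DataFrame(rows)

# Logistic regression of the binary judgment on dilemma type, prompt
# condition, and their interaction, mirroring the analysis in the abstract.
model = smf.logit("utilitarian ~ C(dilemma_type) * C(condition)", data=df).fit()
print(model.summary())
```

In this framing, a positive coefficient on the impersonal dilemma term corresponds to the reported human-like bias toward utilitarian judgments in impersonal dilemmas, and the condition and interaction terms capture how much each preamble shifts a model's judgments.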
