OpenAlex · Updated hourly · Last updated: 16 Apr 2026, 23:11

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Halo Effect in Large Language Models: An Assessment Based on DeepSeek and ChatGPT-4

2025 · 0 citations
Open full text at publisher

Citations: 0 · Authors: 3 · Year: 2025

Abstract

Large Language Models (LLMs), built on the Transformer architecture, have become the core type of generative language model. Unlike humans, with their unique cognitive and emotional capacities, LLMs are trained on massive amounts of natural language data, enabling them to generate human-like responses. This study investigates whether LLMs exhibit halo effects similar to those observed in humans. We conducted experiments on two representative models: DeepSeek, developed by China's DeepSeek Company, and ChatGPT-4, developed by OpenAI in the United States. The results indicate that both models demonstrate significant valence halo effects and exhibit human-like reasoning tendencies when explaining their choices. However, for the social category halo effect, DeepSeek displayed more neutral evaluations than ChatGPT-4. These findings suggest that although LLMs generally replicate human cognitive biases, the specific manifestations vary across models, offering new insights into how artificial intelligence may approach human judgment through different pathways.


Topics

Artificial Intelligence in Healthcare and Education · Computational and Text Analysis Methods · Explainable Artificial Intelligence (XAI)