This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Halo Effect in Large Language Models: An Assessment Based on DeepSeek and ChatGPT-4
Citations: 0
Authors: 3
Year: 2025
Abstract
Large Language Models (LLMs), built on the Transformer architecture, have become the core type of generative language model. Unlike humans, with their unique cognitive and emotional capacities, LLMs are trained on massive amounts of natural language data, enabling them to generate human-like responses. This study investigates whether LLMs exhibit halo effects similar to those observed in humans. We conducted experiments on two representative models: DeepSeek, developed by China's DeepSeek Company, and ChatGPT-4, developed by OpenAI in the United States. The results indicate that both models demonstrate significant valence halo effects and exhibit human-like reasoning tendencies when explaining their choices. However, for the social category halo effect, DeepSeek produced more neutral evaluations than ChatGPT-4. These findings suggest that although LLMs generally replicate human cognitive biases, the specific manifestations vary across models, offering new insights into how artificial intelligence may approximate human judgment through different pathways.
Related Work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,460 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,341 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,791 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,536 citations