OpenAlex · Updated hourly · Last updated: 19.03.2026, 16:31

This is an overview page with metadata on this scholarly work. The full article is available from the publisher.

AI Gender Bias in Moral Guidance: A Computational Content and Sentiment Analysis of ChatGPT, Gemini, Le Chat, and DeepSeek

2025 · 0 citations
Open full text at the publisher

Citations: 0
Authors: 2
Year: 2025

Abstract

As generative AI systems become increasingly integrated into everyday life, their influence over human decision-making and perception has grown rapidly. Tools like ChatGPT, Gemini, Le Chat, and DeepSeek are frequently used for advice, support, and information, often perceived as neutral and objective. However, these systems are not free from bias. This thesis explores the hidden value systems embedded in generative AI responses, particularly in contexts that involve moral reasoning and gendered expectations. The study critically examines how generative AI can subtly replicate and normalize discriminatory or culturally specific norms. We conclude by outlining the sociological risks posed by unchecked AI bias and we call for greater transparency and accountability in AI development.


Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · AI in Service Interactions