This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
AI Gender Bias in Moral Guidance: A Computational Content and Sentiment Analysis of ChatGPT, Gemini, Le Chat, and DeepSeek
Citations: 0 · Authors: 2 · Year: 2025
Abstract
As generative AI systems become increasingly integrated into everyday life, their influence on human decision-making and perception has grown rapidly. Tools like ChatGPT, Gemini, Le Chat, and DeepSeek are frequently consulted for advice, support, and information, and are often perceived as neutral and objective. However, these systems are not free from bias. This thesis explores the hidden value systems embedded in generative AI responses, particularly in contexts involving moral reasoning and gendered expectations. The study critically examines how generative AI can subtly replicate and normalize discriminatory or culturally specific norms. We conclude by outlining the sociological risks posed by unchecked AI bias and call for greater transparency and accountability in AI development.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,514 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,386 citations
Fairness through awareness
2012 · 3,269 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations