This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Plato’s Shadows in the Digital Cave: Controlling Cultural Bias in Generative AI
40
Citations
1
Author
2024
Year
Abstract
Generative Artificial Intelligence (AI) systems, like ChatGPT, have the potential to perpetuate and amplify cultural biases embedded in their training data, which are predominantly produced by dominant cultural groups. This paper explores the philosophical and technical challenges of detecting and mitigating cultural bias in generative AI, drawing on Plato’s Allegory of the Cave to frame the issue as a problem of limited and distorted representation. We propose a multifaceted approach combining technical interventions, such as data diversification and culturally aware model constraints, with a deeper engagement with the cultural and philosophical dimensions of the problem. Drawing on theories of extended cognition and situated knowledge, we argue that mitigating AI biases requires a reflexive interrogation of the cultural contexts of AI development and a commitment to empowering marginalized voices and perspectives. We claim that controlling cultural bias in generative AI is inseparable from the larger project of promoting equity, diversity, and inclusion in AI development and governance. By bridging philosophical reflection with technical innovation, this paper contributes to the growing discourse on responsible and inclusive AI, offering a roadmap for detecting and mitigating cultural biases while grappling with the profound cultural implications of these powerful technologies.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,504 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,856 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,377 citations
Fairness through awareness
2012 · 3,267 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations