This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Algorithmic Governance: Gender Bias in AI-Generated Policymaking?
Citations: 2
Authors: 2
Year: 2025
Abstract
Artificial Intelligence (AI) tools are becoming deeply embedded in everyday life and increasingly influence or automate decision-making processes that could shape not only public opinion but also policies. As their potential impact grows, it is essential to assess the inclusivity of the policy recommendations they could generate and the potential biases they may reinforce. This study examines whether AI systems inherently consider gender in policy proposals, both when gender is explicitly mentioned in prompts and when it is not. We conduct four experiments across diverse policy-making contexts to evaluate whether AI-generated recommendations include, overlook, or misrepresent gender considerations. We run these experiments on two AI tools, ChatGPT (GPT-4) and Microsoft Copilot. To ensure neutrality and reproducibility, we minimize user-specific context and repeat each prompt multiple times. Our findings offer insights into the limitations of current AI tools as policy advisors and contribute to ongoing discussions on algorithmic fairness, implicit gender bias, and the need for gender-aware AI governance. They also raise broader questions about how AI tools understand and represent gender, and how these representations influence the politics of policy-making.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,687 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,879 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,498 citations
Fairness through awareness
2012 · 3,299 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations