This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Replication Data for: More Human than Human: Measuring ChatGPT Political Bias
Citations: 1
Authors: 3
Year: 2023
Abstract
A standing issue is how to measure bias in Large Language Models (LLMs) like ChatGPT. We devise a novel method of sampling, bootstrapping, and impersonation that addresses concerns about the inherent randomness of LLMs and test if it can capture political bias in ChatGPT. Our results indicate that, by default, ChatGPT is aligned with Democrats in the US. Placebo tests indicate that our results are due to bias, not noise or spurious relationships. Robustness tests show that our findings are valid also for Brazil and the UK, different professions, and different numerical scales and questionnaires.
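The abstract describes a sampling-and-bootstrapping approach to deal with the randomness of LLM answers. A minimal sketch of that idea is shown below, assuming repeated identical prompts yield numeric agreement scores; all scores, names, and parameters here are hypothetical illustrations, not the paper's actual data or method:

```python
import random
import statistics

def bootstrap_mean_ci(samples, n_boot=1000, alpha=0.05, seed=0):
    """Bootstrap a confidence interval for the mean of repeated LLM answers.

    `samples` is a list of numeric answers (e.g. agreement scores on a
    political-questionnaire item) collected from many identical prompts,
    so the interval reflects the model's inherent response randomness.
    """
    rng = random.Random(seed)
    # Resample with replacement n_boot times and record each resample's mean.
    means = sorted(
        statistics.mean(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.mean(samples), (lo, hi)

# Hypothetical agreement scores (1-5 scale) from repeated identical prompts.
default_scores = [3, 4, 3, 4, 4, 3, 4, 3, 4, 4]
mean, (lo, hi) = bootstrap_mean_ci(default_scores)
```

Under this sketch, one would compare the bootstrap interval for the default model against intervals obtained when the model impersonates partisans, and treat non-overlapping intervals as evidence of alignment.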
Related Works
The global landscape of AI ethics guidelines
2019 · 4,536 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,392 citations
Fairness through awareness
2012 · 3,270 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations