This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Humans as Mitigators of Biases in Risk Prediction via Field Studies
1 citation · 3 authors · 2022
Abstract
Machine learning algorithms have been used for predicting different risks – financial, medical, and legal – and have been argued to perform more efficiently than human experts. However, this exclusive focus on accuracy can come at the cost of the algorithms discriminating against people due to their age, gender, or race, since accuracy can work in opposition to equity. The challenge is that equity and fairness are innately human values that evolve as societies evolve, making them hard to represent mathematically. Therefore, we propose a framework for including less biased human experts in the algorithm’s prediction loop to improve equity and maintain accuracy. In two field studies, one in the legal domain and the other in credit risk, we utilize publicly available datasets to obtain baseline measures of fairness. Subsequently, we obtain human input, which is used to debias the algorithm. Utilizing less biased human experts, as well as providing transparent and explainable predictions, will help increase legal compliance and the trust of various stakeholders in an organization.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,582 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,868 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,417 citations
Fairness through awareness
2012 · 3,279 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations