This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Improving fairness in machine learning-enabled affirmative actions: a case study in outreach activities in healthcare
Citations: 1 · Authors: 3 · Year: 2024
Abstract
Over the last decade, due to the growing availability of data and computational resources, machine learning (ML) approaches have started to play a key role in the implementation of affirmative-action policies and programs. The underlying assumption is that resource allocation can be informed by the prediction of individual risks, improving the prioritization of potential beneficiaries and increasing the performance of the system. It is therefore important to ensure that biases in the data or the algorithms do not lead to treating some individuals unfavourably. In particular, the notion of group-based fairness seeks to ensure that individuals will not be discriminated against on the basis of their group's protected characteristics. This work proposes an optimization model to improve fairness in ML-enabled affirmative actions, following a post-processing approach. Our case study is an outreach program to increase cervical cancer screening among hard-to-reach women in Bogotá, Colombia. Bias may occur since the protected group (women in the most severe poverty) is under-represented in the data. Computational experiments show that it is possible to address ML bias while maintaining high levels of accuracy.
Related works
The global landscape of AI ethics guidelines
2019 · 4,482 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,362 citations
Fairness through awareness
2012 · 3,258 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations