This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Towards Fair AI Systems: An Insurance Case Study to Identify and Mitigate Discrimination
Citations: 0
Authors: 2
Year: 2025
Abstract
We investigate potential gender-based discrimination in a real-world insurance machine learning model designed to identify claims likely to “explode” in compensation costs. With the EU AI Act and Austrian legal frameworks requiring non-discriminatory algorithmic systems, ensuring fairness in insurance claim prediction models has become critically important. The research examines whether a Light Gradient Boosting Machine (LGBM) model used by an Austrian insurance company exhibits gender discriminatory behavior and explores methods to mitigate such bias. This study analyzed a dataset of 450,000 insurance claims provided by an Austrian insurance company. The baseline analysis revealed significant discrimination against female claimants compared to male claimants. While mitigation methods successfully improved fairness metrics, these improvements came at a cost to predictive performance.
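The abstract does not specify which fairness metrics were used. As an illustration only, the following sketch computes two common group fairness measures of the kind typically used in such audits: statistical parity difference and equal opportunity difference. All variable names and data below are hypothetical and not taken from the study.

```python
# Hypothetical sketch of group fairness metrics for auditing a binary
# classifier (e.g. a claims "explosion" predictor) for gender bias.
# The data here is invented toy data, not from the study's dataset.

def statistical_parity_difference(y_pred, group):
    """P(pred=1 | group=0) - P(pred=1 | group=1): gap in positive rates."""
    g0 = [p for p, g in zip(y_pred, group) if g == 0]
    g1 = [p for p, g in zip(y_pred, group) if g == 1]
    return sum(g0) / len(g0) - sum(g1) / len(g1)

def equal_opportunity_difference(y_true, y_pred, group):
    """TPR(group=0) - TPR(group=1): gap in true-positive rates."""
    def tpr(g):
        preds = [p for yt, p, gg in zip(y_true, y_pred, group)
                 if gg == g and yt == 1]
        return sum(preds) / len(preds)
    return tpr(0) - tpr(1)

# Toy predictions where group 0 is flagged more often than group 1.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]

print(statistical_parity_difference(y_pred, group))          # 0.5
print(equal_opportunity_difference(y_true, y_pred, group))   # 0.5
```

A value of 0 on either metric indicates parity between the groups; the trade-off the abstract describes arises because constraining these gaps toward 0 typically reduces raw predictive performance.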
Related Works
The global landscape of AI ethics guidelines
2019 · 4,640 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,878 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,465 citations
Fairness through awareness
2012 · 3,295 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations