OpenAlex · Updated hourly · Last updated: 14.04.2026, 01:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Towards Fair AI Systems: An Insurance Case Study to Identify and Mitigate Discrimination

2025 · 0 citations · Lecture Notes in Computer Science · Open Access
Open full text at the publisher

0 citations · 2 authors · published 2025

Abstract

We investigate potential gender-based discrimination in a real-world insurance machine learning model designed to identify claims likely to “explode” in compensation costs. With the EU AI Act and Austrian legal frameworks requiring non-discriminatory algorithmic systems, ensuring fairness in insurance claim prediction models has become critically important. The research examines whether a Light Gradient Boosting Machine (LGBM) model used by an Austrian insurance company exhibits gender discriminatory behavior and explores methods to mitigate such bias. This study analyzed a dataset of 450,000 insurance claims provided by an Austrian insurance company. The baseline analysis revealed significant discrimination against female claimants compared to male claimants. While mitigation methods successfully improved fairness metrics, these improvements came at a cost to predictive performance.
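The abstract refers to fairness metrics without naming them. A minimal sketch of how gender-based disparities are commonly measured for a binary classifier such as the "exploding claim" predictor described above, using the standard demographic-parity and equal-opportunity gaps; the function names, toy data, and group labels are illustrative assumptions, not the paper's actual code or metrics:

```python
# Illustrative sketch (assumed, not the paper's code): two common
# group-fairness gaps for a binary "claim will explode" classifier.

def selection_rate(y_pred, group, g):
    """Fraction of group g that the model flags as positive."""
    preds = [p for p, s in zip(y_pred, group) if s == g]
    return sum(preds) / len(preds)

def true_positive_rate(y_true, y_pred, group, g):
    """Recall restricted to group g (basis of equal opportunity)."""
    hits = [p for t, p, s in zip(y_true, y_pred, group) if s == g and t == 1]
    return sum(hits) / len(hits)

def fairness_gaps(y_true, y_pred, group):
    """Return (demographic-parity gap, equal-opportunity gap)
    between the groups labeled "F" and "M"; 0.0 means parity."""
    dp = abs(selection_rate(y_pred, group, "F")
             - selection_rate(y_pred, group, "M"))
    eo = abs(true_positive_rate(y_true, y_pred, group, "F")
             - true_positive_rate(y_true, y_pred, group, "M"))
    return dp, eo

# Toy data: the model flags male claimants more often despite
# identical ground-truth rates in both groups.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = ["M", "M", "M", "M", "F", "F", "F", "F"]
dp_gap, eo_gap = fairness_gaps(y_true, y_pred, group)  # both 0.5 here
```

Mitigation methods of the kind the study evaluates (e.g. reweighting or threshold adjustment per group) aim to shrink these gaps, which is exactly where the reported fairness-versus-performance trade-off arises.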

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education