OpenAlex · Updated hourly · Last updated: 28 Mar 2026, 16:02

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Fairness Auditing of Tabular Machine Learning via AIME: Exposing Dataset Bias with Groupwise Δ‑Importance

2025 · 0 citations
Open full text at publisher

0 citations · 3 authors · Year: 2025

Abstract

This study proposes a practical workflow for fairness auditing in tabular machine learning (tabular ML) that combines group-wise differences in global feature importance (Δ-importance), computed with Approximate Inverse Model Explanations (AIME), with concise fairness metrics: Demographic Parity (DP: selection rate and the 4/5 rule) and Equalized Odds (EO: TPR/FPR differences). Using the Adult (UCI Census Income) dataset, we trained a LightGBM model with the sensitive attributes (gender and race) excluded from the training features and evaluated it at a threshold of 0.5. Overall performance was 0.875, ROC-AUC was 0.929, and the selection rate was 0.200. From a fairness perspective, the selection rates by gender were 0.257 (male) and 0.084 (female), giving a four-fifths ratio of 0.328, with an EO TPR difference of 0.072 and FPR difference of 0.058. By race, the maximum selection rate was 0.238 (Asian-Pacific Islander) and the minimum 0.073 (Other), giving a four-fifths ratio of 0.308 and an EO TPR difference of 0.161 and FPR difference of 0.088, indicating substantial deviations between groups. By computing overall and group-specific importance with AIME in a consistent procedure and extracting Δ-importance, we can identify specifically which features contribute differently across groups, enabling explanation-guided mitigation such as preprocessing, feature design, threshold adjustment, calibration, and constrained learning. The workflow is not specific to AIME and can be replaced by, or combined with, other XAI metrics such as SHAP, Permutation Importance, and SAGE. Limitations include dependence on the operating threshold and group sizes, estimation uncertainty, and the inherent incompatibility of different fairness criteria. Nevertheless, the method is useful for visualizing data-driven biases and explicitly revealing their "breakdown," demonstrating its practical applicability as a fairness-audit template for tabular ML.
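The fairness metrics named in the abstract can be sketched in a few lines of plain Python. This is an illustrative reconstruction, not the paper's code: the `audit` and `delta_importance` helpers and all data below are hypothetical, and AIME itself is not reproduced here (the Δ-importance step only assumes per-group and overall importance dictionaries from any XAI method, e.g. AIME, SHAP, or permutation importance).

```python
# Sketch of the audit metrics: Demographic Parity (group selection
# rates and the 4/5 rule), Equalized Odds (TPR/FPR gaps across
# groups), and group-wise Delta-importance. All inputs are binary
# labels/predictions; names and data are illustrative only.

def selection_rate(preds):
    """Fraction of positive predictions (DP selection rate)."""
    return sum(preds) / len(preds)

def tpr_fpr(y_true, y_pred):
    """True-positive and false-positive rates for one group."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

def audit(groups):
    """groups: {name: (y_true, y_pred)}.

    Returns per-group selection rates, the four-fifths ratio
    (min rate / max rate), and the EO TPR and FPR gaps
    (max - min across groups).
    """
    rates = {g: selection_rate(p) for g, (_, p) in groups.items()}
    four_fifths = min(rates.values()) / max(rates.values())
    tprs, fprs = {}, {}
    for g, (y, p) in groups.items():
        tprs[g], fprs[g] = tpr_fpr(y, p)
    eo_tpr_gap = max(tprs.values()) - min(tprs.values())
    eo_fpr_gap = max(fprs.values()) - min(fprs.values())
    return rates, four_fifths, eo_tpr_gap, eo_fpr_gap

def delta_importance(overall, group):
    """Delta-importance: group-wise minus overall feature importance."""
    return {f: group.get(f, 0.0) - overall.get(f, 0.0) for f in overall}
```

A four-fifths ratio below 0.8 (the paper reports 0.328 for gender) flags a disparate-impact concern under the 4/5 rule, and large positive entries in `delta_importance` point to features that drive predictions disproportionately for one group.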

Similar works

Authors

Institutions

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education