This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Fairness Auditing of Tabular Machine Learning via AIME: Exposing Dataset Bias with Groupwise Δ‑Importance
Citations: 0 · Authors: 3 · Year: 2025
Abstract
This study proposes a practical workflow for fairness auditing in tabular machine learning (Tabular ML) that combines group-wise differences in global feature importance (Δ-importance), computed with Approximate Inverse Model Explanations (AIME), with concise fairness metrics: Demographic Parity (DP: selection rate and the four-fifths rule) and Equalized Odds (EO: TPR/FPR differences). On the Adult (UCI Census Income) dataset, we trained a LightGBM model with the sensitive attributes (gender and race) excluded from the training features and evaluated it at a decision threshold of 0.5. Overall performance was 0.875, ROC-AUC was 0.929, and the selection rate was 0.200. From a fairness perspective, the selection rates by gender were 0.257 (male) and 0.084 (female), a four-fifths ratio of 0.328, with an EO TPR difference of 0.072 and an FPR difference of 0.058. By race, the maximum selection rate was 0.238 (Asian-Pacific Islander) and the minimum 0.073 (Other), a four-fifths ratio of 0.308, with an EO TPR difference of 0.161 and an FPR difference of 0.088, indicating substantial disparities between groups. By computing overall and group-specific importance with AIME under a consistent procedure and extracting Δ-importance, we can identify which features contribute differently across groups, enabling explanation-guided mitigation such as preprocessing, feature design, threshold adjustment, calibration, and constrained learning. The workflow is not specific to AIME: other XAI importance measures, such as SHAP, Permutation Importance, and SAGE, can substitute for or complement it. Limitations include dependence on the operational threshold and on group sizes, estimation uncertainty, and the impossibility of satisfying all fairness criteria simultaneously. Nevertheless, the method is useful for visualizing data-driven biases and making their "breakdown" explicit, demonstrating its practical applicability as a fairness audit template for Tabular ML.
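The audit quantities named in the abstract reduce to simple groupwise statistics, sketched below in Python. Since AIME is the authors' own method and is not reproduced here, the Δ-importance step is illustrated with scikit-learn's permutation_importance, which the abstract lists as an interchangeable importance measure; all function names and the array encodings of groups and predictions are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the audit computations, assuming y_pred holds 0/1
    # decisions at the 0.5 threshold and `group` is an array of group labels
    # aligned with the rows of X. Permutation importance stands in for AIME,
    # as the abstract permits.
    import numpy as np
    from sklearn.inspection import permutation_importance

    def demographic_parity(y_pred, group):
        """Selection rate per group and the four-fifths ratio (min/max)."""
        rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}
        return rates, min(rates.values()) / max(rates.values())

    def equalized_odds_gaps(y_true, y_pred, group):
        """Largest TPR and FPR differences across groups.
        Assumes every group contains both positive and negative examples."""
        tpr, fpr = [], []
        for g in np.unique(group):
            m = group == g
            tpr.append(y_pred[m & (y_true == 1)].mean())  # per-group TPR
            fpr.append(y_pred[m & (y_true == 0)].mean())  # per-group FPR
        return max(tpr) - min(tpr), max(fpr) - min(fpr)

    def delta_importance(model, X, y, group, n_repeats=10, seed=0):
        """Group-specific global importance minus overall importance."""
        overall = permutation_importance(
            model, X, y, n_repeats=n_repeats,
            random_state=seed).importances_mean
        return {g: permutation_importance(
                       model, X[group == g], y[group == g],
                       n_repeats=n_repeats,
                       random_state=seed).importances_mean - overall
                for g in np.unique(group)}

Features whose Δ-importance is large in magnitude for a protected group are the natural targets of the mitigation steps the abstract lists (preprocessing, feature design, threshold adjustment, calibration, constrained learning).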
Related Works
The global landscape of AI ethics guidelines
2019 · 4,566 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,865 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,411 citations
Fairness through awareness
2012 · 3,276 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations