OpenAlex · Updated hourly · Last updated: 06.04.2026, 07:21

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable Fairness-Attentive ML and DL (Fair-ExplainHR): Ethical and Transparent Attrition Prediction with Engagement, Economic, and Behavioral Signals

2025 · 0 citations
Open full text at the publisher

0 citations · 6 authors · Year: 2025

Abstract

With a data-driven approach to workforce management, forecasting employee attrition is crucial for reducing organizational disruption and maximizing human capital strategy. Fair-ExplainHR, proposed in this work, is an improved, explainable, and fairness-aware machine learning model for ethically transparent attrition prediction. Motivated by the shortcomings of current models, largely their lack of attention to interpretability, fairness, and human-focused cues, this effort incorporates new behavioral (burnout), engagement, and economic signals into the prediction pipeline. Based on the IBM HR Analytics dataset, the approach combines robust preprocessing with SMOTE for imbalance management, feature engineering, and sophisticated model tuning using Optuna. The prediction engine consists of a deep stacked ensemble of GRU-based neural networks, XGBoost, and TabNet, with a meta-classifier coordinating decision-level fusion. In contrast to earlier hybrid models that were limited to 95% accuracy, Fair-ExplainHR recorded a higher accuracy of 96%, with substantial gains from deep learning integration and explainability. SHAP analysis also provides transparent insight into model decisions, addressing ethical AI concerns. The work further promotes responsible AI practices, serving as a benchmark for future attrition prediction systems in corporate HR analytics.
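The decision-level fusion described in the abstract, where base learners feed a meta-classifier, can be sketched with scikit-learn's stacking API. This is a minimal illustration under stated assumptions: `GradientBoostingClassifier` and `MLPClassifier` stand in for the paper's XGBoost and GRU/TabNet base learners, synthetic data stands in for the IBM HR Analytics dataset (roughly 16% positives, mimicking its attrition rate), and no SMOTE, Optuna tuning, or SHAP step is shown.

```python
# Hedged sketch of decision-level stacking, NOT the paper's implementation:
# sklearn stand-ins replace XGBoost (GradientBoostingClassifier) and the
# GRU/TabNet deep learners (MLPClassifier); data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic imbalanced "attrition" data (~16% positive class).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.84], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Base learners produce out-of-fold predictions; the logistic-regression
# meta-classifier performs the decision-level fusion over those outputs.
stack = StackingClassifier(
    estimators=[
        ("gbm", GradientBoostingClassifier(random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_tr, y_tr)
print(f"held-out accuracy: {stack.score(X_te, y_te):.2f}")
```

In the paper's full pipeline, SMOTE would be applied to the training folds before fitting, and SHAP values would be computed on the fitted ensemble to explain individual attrition predictions.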


Topics

AI and HR Technologies · Financial Distress and Bankruptcy Prediction · Artificial Intelligence in Healthcare and Education