This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Navigating Fairness in AI-based Prediction Models: Theoretical Constructs and Practical Applications
2 citations · 6 authors · 2025
Abstract
Artificial Intelligence (AI)-based prediction models, including risk scoring systems and decision support systems, are increasingly adopted in healthcare. Addressing AI fairness is essential to fighting health disparities and achieving equitable performance and patient outcomes. Numerous and conflicting definitions of fairness complicate this effort. This paper aims to structure the transition of AI fairness from theory to practical application with appropriate fairness metrics. For 27 definitions of fairness identified in the recent literature, we assess their relation to the model's intended use, the type of decision influenced, and the ethical principles of distributive justice. We advocate that, due to limitations in some notions of fairness, clinical utility, performance-based metrics (area under the receiver operating characteristic curve), calibration, and statistical parity are the most relevant group-based metrics for medical applications. Through two use cases, we demonstrate that different metrics may be applicable depending on the intended use and ethical framework. Our approach provides a foundation for AI developers and assessors by assessing model fairness and the impact of bias mitigation strategies, hence promoting more equitable AI-based implementations.
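To make one of the group-based metrics named in the abstract concrete, the sketch below computes the statistical parity difference: the gap in positive-prediction rates between two groups defined by a sensitive attribute. This is a generic illustration of the metric, not code from the paper; the function name, toy data, and group labels are assumptions for demonstration.

```python
# Illustrative sketch (not from the paper): statistical parity difference,
# one of the group-based fairness metrics the abstract recommends.
# Statistical parity holds when the rate of positive predictions is equal
# across groups, i.e. when this difference is zero.

def statistical_parity_difference(predictions, groups, group_a, group_b):
    """Difference in positive-prediction rates between group_a and group_b.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    def positive_rate(group):
        selected = [p for p, g in zip(predictions, groups) if g == group]
        return sum(selected) / len(selected)

    return positive_rate(group_a) - positive_rate(group_b)

# Toy example: group "A" is flagged positive 3/4 of the time,
# group "B" only 1/4 of the time.
preds  = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(preds, groups, "A", "B"))  # 0.75 - 0.25 = 0.5
```

In a clinical setting, a nonzero difference would indicate that the model recommends an intervention at different rates for the two patient groups; whether that gap is acceptable depends on the intended use and the ethical framework, as the paper argues.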
Related works
Meta-analysis in clinical trials
1986 · 38,745 citations
Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement
2009 · 37,543 citations
PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation
2018 · 37,131 citations
The Cochrane Collaboration's tool for assessing risk of bias in randomised trials
2011 · 33,471 citations
RoB 2: a revised tool for assessing risk of bias in randomised trials
2019 · 28,346 citations