This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Rethinking fairness in unsupervised healthcare AI: A methodological scoping review
Citations: 0
Authors: 3
Year: 2026
Abstract
OBJECTIVE: Fairness in machine learning has been extensively studied in supervised settings, where labeled outcomes allow direct assessment of bias. In contrast, fairness in unsupervised learning, particularly in healthcare, remains insufficiently examined. In the absence of labels, it is unclear how fairness should be defined or evaluated for discovered structures such as patient subgroups, disease subtypes, or trajectories, despite their growing influence on clinical understanding and decision-making. This review aims to systematically examine how fairness is conceptualized, operationalized, and evaluated in unsupervised healthcare AI. METHODS: We conducted a PRISMA-guided methodological scoping review of the literature on fairness in unsupervised learning applied to healthcare data. The review focused on identifying algorithmic mechanisms, evaluation strategies, and methodological assumptions rather than comparing predictive performance or quantitatively synthesizing results across the reviewed literature. Records were analyzed with respect to data modalities, unsupervised learning techniques, and the underlying definitions of fairness they employed. RESULTS: The review reveals rapid growth in interest in fairness-aware unsupervised healthcare AI, accompanied by substantial heterogeneity and conceptual inconsistency. Fairness is addressed across diverse data types and methodological approaches, often without explicit alignment to clinical or ethical objectives. To structure this fragmented landscape, we propose a taxonomy of fairness approaches organized into five families: Individual Fairness, Performance Dependence, Welfare-Anchored approaches, Statistical Inference-based approaches, and Representation Parity. Each family embodies a distinct conception of equity and entails specific ethical and methodological trade-offs.
We further identify recurring challenges, including defining fairness without labeled outcomes, limited incorporation of clinical expertise, and weak alignment between fairness objectives and medical validity. CONCLUSION: Fairness in unsupervised healthcare AI is an emerging but conceptually unsettled field. Current approaches reflect diverse and sometimes incompatible notions of equity, underscoring the need for clearer theoretical grounding. Progress will require explicit articulation of fairness goals, stronger integration of domain expertise and participatory evaluation, and closer alignment between algorithmic fairness criteria and clinically meaningful structures. This review provides a conceptual and methodological foundation to support more rigorous and transparent development of fair unsupervised healthcare AI systems.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,551 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,942 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Authors
Institutions
- Université de Versailles Saint-Quentin-en-Yvelines (FR)
- Université Paris-Saclay (FR)
- Assistance Publique – Hôpitaux de Paris (FR)
- Hôpital Raymond-Poincaré (FR)
- Centre National de la Recherche Scientifique (FR)
- Inserm (FR)
- Université de Lille (FR)
- Centre Hospitalier Universitaire de Lille (FR)
- École Centrale de Lille (FR)