This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A Comparative Study of Fairness in Medical Machine Learning
Citations: 6
Authors: 6
Year: 2023
Abstract
Although the applications of machine learning (ML) are revolutionizing medicine, current algorithms are not resilient against bias. Fairness in ML can be defined as measuring the potential bias in algorithms with respect to characteristics such as race, gender, and age. In this paper, we perform a comparative study to detect the bias caused by imbalanced group representation in medical datasets. We investigate bias in medical imaging tasks on the following datasets: a chest X-ray dataset (CXR lung segmentation) and the Stanford Diverse Dermatology Images (DDI) dataset (skin cancer prediction). Our results show differences in the performance of state-of-the-art models across different groups. To mitigate this performance disparity, we explore different bias mitigation approaches and demonstrate that integrating these approaches into ML models can improve fairness without degrading overall performance.
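The per-group performance comparison the abstract describes can be sketched with a minimal example: compute a metric (here, accuracy) separately for each demographic subgroup and report the max-min gap as a simple disparity measure. The data, group labels, and function name below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative labels, predictions, and group membership (e.g. skin-tone
# groups in a dermatology dataset); values here are made up for the sketch.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def per_group_accuracy(y_true, y_pred, group):
    """Accuracy for each subgroup, keyed by group label."""
    accs = {}
    for g in np.unique(group):
        mask = group == g
        accs[g] = float((y_true[mask] == y_pred[mask]).mean())
    return accs

accs = per_group_accuracy(y_true, y_pred, group)
# One simple disparity measure: the gap between best- and worst-served group.
gap = max(accs.values()) - min(accs.values())
print(accs, gap)  # → {'A': 0.75, 'B': 0.5} 0.25
```

A nonzero gap signals that the model serves one subgroup worse than another even when overall accuracy looks acceptable, which is the kind of disparity the bias mitigation approaches in the paper aim to reduce.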
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations