This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Generative AI Mitigates Representation Bias and Improves Model Fairness Through Synthetic Health Data
Citations: 7
Authors: 6
Year: 2023
Abstract
Representation bias in health data can lead to unfair decisions and compromise the generalisability of research findings. As a consequence, underrepresented subpopulations, such as those from specific ethnic backgrounds or genders, do not benefit equally from clinical discoveries. Several approaches have been developed to mitigate representation bias, ranging from simple resampling methods, such as SMOTE, to recent approaches based on generative adversarial networks (GANs). However, generating high-dimensional, time-series synthetic health data remains a significant challenge. In response, we devised a novel architecture (CA-GAN) that synthesises authentic, high-dimensional time-series data. CA-GAN outperforms state-of-the-art methods in both qualitative and quantitative evaluations while avoiding mode collapse, a serious GAN failure. We perform the evaluation using 7535 patients with hypotension and sepsis from two diverse, real-world clinical datasets. We show that synthetic data generated by our CA-GAN improve model fairness for Black patients as well as female patients when each subpopulation is evaluated separately. Furthermore, CA-GAN generates authentic data of the minority class while faithfully maintaining the original data distribution, resulting in improved performance on a downstream predictive task.
Author summary
Doctors and other healthcare professionals increasingly use Artificial Intelligence (AI) to make better decisions about patients' diagnoses, suggest optimal treatments, and estimate patients' future health risks. These AI systems learn from existing health data, which may not accurately reflect the health of everyone, particularly people from certain racial or ethnic groups, genders, or those with lower incomes. As a result, the AI may not work as well for these groups and could even worsen existing health disparities. To address this, we developed purpose-built AI software that creates synthetic patient data. Synthetic data created by our software mimic real patient data without actually copying them, protecting patients' privacy. Using our synthetic data yields a more representative dataset across all groups and ensures that AI algorithms learn to be fairer for all patients.
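The abstract contrasts the paper's CA-GAN with simple resampling baselines such as SMOTE, which mitigate representation bias by interpolating between minority-class samples. The sketch below illustrates that classic SMOTE idea in plain NumPy; it is an illustrative baseline only, not the paper's CA-GAN architecture, and the function name and parameters are my own for this example.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    each base sample and one of its k nearest minority-class neighbours
    (the classic SMOTE idea; illustrative, not the paper's CA-GAN)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per sample
    base = rng.integers(0, n, size=n_new)       # random base sample per new point
    nbr = nn[base, rng.integers(0, k, size=n_new)]  # random neighbour of each base
    gap = rng.random((n_new, 1))                # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[nbr] - X_min[base])

# toy minority class: 20 two-dimensional points
X_min = np.random.default_rng(0).normal(size=(20, 2))
X_syn = smote_oversample(X_min, n_new=30, k=5, rng=1)
print(X_syn.shape)  # (30, 2)
```

Because each synthetic point lies on a segment between two real minority samples, SMOTE cannot capture temporal structure in high-dimensional time series, which is the gap the abstract says GAN-based generators, including CA-GAN, aim to fill.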
Related Works
"Why Should I Trust You?"
2016 · 14,156 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,543 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Analysis of Survival Data.
1985 · 4,379 citations