This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Raising awareness of potential biases in medical machine learning: Experience from a Datathon
Citations: 3
Authors: 23
Year: 2024
Abstract
Objective: To challenge clinicians and informaticians to learn about potential sources of bias in medical machine learning models through investigation of data and predictions from an open-source severity of illness score.

Methods: Over a two-day period (total elapsed time approximately 28 hours), we conducted a datathon that challenged interdisciplinary teams to investigate potential sources of bias in the Global Open Source Severity of Illness Score (GOSSIS-1). Teams were invited to develop hypotheses, to use tools of their choosing to identify potential sources of bias, and to provide a final report.

Results: Five teams participated, three of which included both informaticians and clinicians. Most (4/5) used Python for analyses; the remaining team used R. Common analysis themes included the relationship of the GOSSIS-1 prediction score with demographic and care-related variables; relationships between demographics and outcomes; calibration and factors related to the context of care; and the impact of missingness. Representativeness of the population, differences in calibration and model performance among groups, and differences in performance across hospital settings were identified as possible sources of bias.

Discussion: Datathons are a promising approach for challenging developers and users to explore questions relating to unrecognized biases in medical machine learning algorithms.

Author summary: Disadvantaged groups are at risk of being adversely affected by biased medical machine learning models. To avoid these undesirable outcomes, developers and users must understand the challenges involved in identifying potential biases. We conducted a datathon aimed at challenging a diverse group of participants to explore an open-source patient severity model for potential biases. Five groups of clinicians and informaticians used tools of their choosing to evaluate possible sources of bias, applying a range of analytic techniques and exploring multiple features. By engaging diverse participants in hands-on work with meaningful data, datathons have the potential to raise awareness of potential biases and promote best practices in developing fair and equitable medical machine learning models.
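The subgroup calibration and discrimination checks described in the results can be illustrated with a short script. The following is a minimal sketch, not the participants' actual code: the file name `icu_cohort.csv` and the columns `gossis1_prob` (predicted mortality probability), `died` (observed outcome), and `ethnicity` are hypothetical placeholders, shown only to indicate how a team might compare model performance across demographic groups.

```python
# Minimal sketch (hypothetical data/columns) of one datathon analysis theme:
# comparing discrimination and calibration of a severity score across groups.
import pandas as pd
from sklearn.calibration import calibration_curve
from sklearn.metrics import roc_auc_score

df = pd.read_csv("icu_cohort.csv")  # hypothetical cohort extract

for group, sub in df.groupby("ethnicity"):
    # Discrimination within the subgroup (area under the ROC curve).
    auc = roc_auc_score(sub["died"], sub["gossis1_prob"])
    # Calibration: observed event rate vs. mean predicted probability,
    # computed in quantile bins of the predicted score.
    frac_pos, mean_pred = calibration_curve(
        sub["died"], sub["gossis1_prob"], n_bins=10, strategy="quantile"
    )
    print(f"{group}: n={len(sub)}, AUC={auc:.3f}")
    for p, o in zip(mean_pred, frac_pos):
        print(f"  predicted {p:.2f} -> observed {o:.2f}")
```

Systematic gaps between predicted and observed rates in one subgroup but not others would be the kind of calibration difference the abstract identifies as a possible source of bias.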
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
- Harry Hochheiser
- Jesse Klug
- Thomas Mathie
- Tom Pollard
- Jesse D. Raffa
- Stephanie L. Ballard
- Evamarie A. Conrad
- Smitha Edakalavan
- Allan M. Joseph
- Nader Alnomasy
- Sarah Nutman
- V. Hill
- Sumit Kumar Kapoor
- Eddie Pérez Claudio
- Ольга В’ячеславівна Кравченко
- Ruoting Li
- Mehdi Nourelahi
- J. B. Díaz
- Warren Taylor
- Sydney Rooney
- Maeve Woeltje
- Leo Anthony Celi
- Christopher M. Horvat
Institutions
- University of Pittsburgh (US)
- UPMC Health System (US)
- Massachusetts Institute of Technology (US)
- Cincinnati Children's Hospital Medical Center (US)
- University of Cincinnati Medical Center (US)
- University of Ha'il (SA)
- Health Information Management (BE)
- Children's Hospital of Pittsburgh (US)
- Hadassah Medical Center (IL)
- Beth Israel Deaconess Medical Center (US)
- Harvard University (US)