OpenAlex · Updated hourly · Last updated: 19 Mar 2026, 17:26

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Measurement and Mitigation of Algorithmic Bias and Unfairness in Healthcare AI Models Developed for the CMS AI Health Outcomes Challenge

2022 · 6 citations · Open Access
Open full text at publisher

Citations: 6 · Authors: 3 · Year: 2022

Abstract

Algorithms play an increasingly prevalent role in healthcare, where they are used to target interventions, reward performance, and distribute resources, including funding. Yet it is widely recognized that many algorithms in use today may inadvertently encode and perpetuate biases and contribute to health inequities. Artificial intelligence algorithms must therefore be evaluated not only for accuracy but also for whether they could worsen disparities in health outcomes. This paper presents the details and results of ClosedLoop's methods for measuring and mitigating bias in machine learning models, which formed the winning submission in the CMS AI Health Outcomes Challenge. The submission applied a comprehensive framework for assessing algorithmic bias and fairness, and developed and applied a metric suited to real-world healthcare settings for assessing and reducing the presence and impact of unfairness. It demonstrated precise and transparent measurement of algorithmic bias from multiple sources, including data representativeness, subgroup validity, label choice, and feature bias. For feature bias, the submission examined feature selection and diversity in detail, including the appropriateness of including race in algorithm development. It also demonstrated how fairness criteria can be used to adjust care management enrollment thresholds to mitigate unfairness. Computational methods and measures exist that allow healthcare organizations to measure and mitigate algorithmic bias and unfairness in models used in practical healthcare settings. It is possible for healthcare organizations to adopt policies and practices that enable them to design, implement, and maintain algorithms that are highly accurate, unbiased, and fair.
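The abstract's idea of adjusting care management enrollment thresholds against a fairness criterion can be illustrated with a minimal sketch. This is not ClosedLoop's actual method or data: the function names, the demographic-parity-style criterion, and all risk scores below are hypothetical, chosen only to show the mechanics of comparing enrollment rates across subgroups and shifting one group's threshold to narrow the gap.

```python
# Illustrative sketch only (hypothetical names and data, not the paper's
# implementation): measure unfairness as a gap in care-management
# enrollment rates between two subgroups, then lower the disadvantaged
# group's threshold until the gap falls within a tolerance.

def enrollment_rate(scores, threshold):
    """Fraction of members whose risk score meets the enrollment threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def equalize_enrollment(scores_a, scores_b, threshold, step=0.01, tol=0.02):
    """Lower group B's threshold until its enrollment rate is within `tol`
    of group A's rate at the shared threshold (a demographic-parity-style
    adjustment)."""
    target = enrollment_rate(scores_a, threshold)
    thr_b = threshold
    while enrollment_rate(scores_b, thr_b) < target - tol and thr_b > 0:
        thr_b = round(thr_b - step, 2)  # round to avoid float drift
    return thr_b

# Hypothetical risk scores for two subgroups
group_a = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
group_b = [0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
base = 0.65

print(enrollment_rate(group_a, base))  # group A enrolled at a higher rate
print(enrollment_rate(group_b, base))
thr_b = equalize_enrollment(group_a, group_b, base)
print(thr_b, enrollment_rate(group_b, thr_b))  # gap closed after shift
```

In practice the choice of fairness criterion matters: equalizing enrollment rates (as here) is only one option, and the paper's framework also considers bias sources such as label choice and feature bias that a threshold adjustment alone cannot address.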
Author summary

AI has come of age through the alchemy of cheap parallel (cloud) computing combined with the availability of big data and better algorithms. Problems that seemed unconquerable a few years ago are being solved, at times with startling gains. AI has finally arrived in health care, where the stakes are high, and the complexity and criticality of issues can far outweigh other applications. AI's arrival is good; organizations are confronting forces strong enough that they may only yield once AI is brought to bear. AI has started to play a central role in targeting care interventions, rewarding physician performance, and distributing resources, including funding. Here's the problem: if health care's algorithms are biased — something that researchers at the Center for Applied Artificial Intelligence at the University of Chicago's Booth School of Business have concluded — then AI solutions designed to drive better outcomes can make things worse. The good news is that these experts also said that algorithmic bias, while pervasive, is not inevitable. The key is to define the processes and tools that can help measure and address it. The work presented in this paper represents an important contribution to these tools and a real-world demonstration of results.

Topics

Health Systems, Economic Evaluations, Quality of Life · Healthcare cost, quality, practices · Artificial Intelligence in Healthcare and Education