Understanding and Mitigating Unintended Bias in Medical AI Systems

2026 · 0 citations · 10 authors · Harvard Data Science Review · Open Access

Abstract

Complex AI technologies, including generative AI models, are increasingly being introduced in clinical settings without corresponding tools or methods to systematically evaluate them for the unintended biases they may exhibit. Current literature extensively documents the risks of explicit and implicit biases by healthcare providers, but actionable guidance for identifying and mitigating biases in the AI systems those providers use remains lacking. Tackling unintended bias in medical AI is challenging but vital for building transparency and trust in healthcare systems increasingly shaped by these technologies.

To address this absence of actionable guidance, we introduce the Unintended Bias Risk Matrix (UBRM), a tool that helps identify risks of unintended bias in AI systems. The UBRM identifies key risk factors, grounded in prior literature and our empirical experience testing AI systems for bias, and guides developers and deployers through deriving a use-case- and context-specific unintended bias risk assessment for an AI system. This assessment informs organizations, based on their risk appetite and AI governance policies, which AI systems must be carefully monitored and tested for unintended biases. For illustration, we apply the UBRM to two common medical AI use cases: one traditional machine learning application and one large language model (LLM) application. These use cases demonstrate the practical application of the UBRM in addressing and mitigating unintended bias in medical AI systems.
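As a rough illustration of how a matrix-style risk assessment like the UBRM might be operationalized, the sketch below scores a small set of risk factors and maps the result to a coarse risk tier that an organization could compare against its risk appetite. The abstract does not enumerate the UBRM's actual factors, weights, or thresholds, so every factor name, score, and cutoff here is a hypothetical assumption for illustration, not the authors' method.

```python
"""Hypothetical sketch of a UBRM-style bias risk assessment.
All factor names, scores, and thresholds are illustrative
assumptions; the paper's actual matrix is not reproduced here."""

from dataclasses import dataclass


@dataclass
class RiskFactor:
    name: str   # e.g. "training data representativeness" (assumed factor)
    score: int  # analyst-assigned severity, 1 (low) to 5 (high)


def overall_risk(factors: list[RiskFactor]) -> str:
    """Map the mean factor score to a coarse risk tier that can be
    weighed against an organization's risk appetite and AI governance
    policies."""
    mean = sum(f.score for f in factors) / len(factors)
    if mean >= 4:
        return "high: prioritize bias testing and continuous monitoring"
    if mean >= 2.5:
        return "medium: schedule periodic bias audits"
    return "low: document the rationale and re-review on change"


# Example: assessing a hypothetical LLM-based clinical documentation tool.
assessment = [
    RiskFactor("training data representativeness", 4),
    RiskFactor("use of protected attributes or proxies", 3),
    RiskFactor("autonomy of the system in clinical decisions", 2),
]
print(overall_risk(assessment))  # -> "medium: schedule periodic bias audits"
```

The tiered output mirrors the abstract's framing: the assessment does not decide for an organization, it flags which systems warrant closer monitoring and testing given that organization's own governance policies.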
