OpenAlex · Updated hourly · Last updated: Apr 8, 2026, 01:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Beyond Accuracy – Bias in Medical AI and Consequences for Patient Empowerment and Digital Equity: A Scoping Review (Preprint)

2026 · 0 citations · Open Access
Open full text at the publisher

0 Citations · 3 Authors · Year: 2026

Abstract

BACKGROUND: Artificial intelligence (AI) is increasingly embedded in digital health applications and shapes how patients access information, experience care, and participate in health decisions. However, concerns are growing about whether medical AI systems are fair, transparent, accountable, and inclusive. Biases in patient-facing AI tools may exclude certain groups and undermine core dimensions of patient empowerment, such as autonomy, trust, and control. Yet existing research rarely examines the direct impact of AI-related bias on patient empowerment, instead inferring effects from proxy measures such as access, system performance, or information quality.

OBJECTIVE: This scoping review aims to systematically map and synthesize the scientific literature on bias in medical AI in health care, with a specific focus on its implications for patient empowerment and digital inclusion. The review integrates technical, social, and structural dimensions of bias to provide a conceptual overview of how these factors shape equitable AI implementation.

METHODS: This scoping review was conducted in accordance with the Joanna Briggs Institute methodology and reported following the PRISMA-ScR guidelines. PubMed/MEDLINE and EBSCOhost databases were searched in August 2025, with Google Scholar used for supplementary searches. Peer-reviewed studies published in English or German from 2015 onward were included if they addressed bias in medical AI applications in health care in relation to patient empowerment or the digital divide. Eligibility criteria were structured using the Population–Concept–Context framework, encompassing patient populations, technical and structural forms of bias, and AI applications across the patient journey. Study selection was performed according to predefined criteria, and findings were synthesized descriptively.

RESULTS: The search identified 497 records, of which 23 studies met the inclusion criteria. Most studies (22/23, 96%) reported at least one form of bias, and 18/23 (78%) addressed multiple bias categories (mean 2.9 categories per study). Social bias was most frequently described (21/23, 91%), followed by algorithmic or technical bias (16/23, 70%), structural bias (14/23, 61%), and design bias (12/23, 52%). Bias was predominantly conceptualized as a multidimensional sociotechnical phenomenon rather than a purely technical issue and was particularly concentrated in patient-facing AI applications. Although many studies proposed strategies such as transparency, participatory design, or inclusive data practices, these approaches were rarely implemented or empirically evaluated. Overall, the findings reveal a gap between the recognition of bias in medical AI and the operationalization of empowerment-oriented mitigation strategies.

CONCLUSIONS: This scoping review shows that bias in medical AI is widely recognized as a multidimensional sociotechnical issue, yet its implications for patient empowerment are rarely examined in a conceptually explicit or operationalized manner. While risks related to digital inequality are frequently acknowledged, empowerment-oriented mitigation strategies remain largely underdeveloped and unevaluated. Future research should integrate intersectional perspectives and systematically assess how design, data practices, and governance structures influence empowerment outcomes in patient-facing AI applications.



Topics

Artificial Intelligence in Healthcare and Education · Digital Mental Health Interventions · Ethics and Social Impacts of AI