OpenAlex · Updated hourly · Last updated: 16 Mar 2026, 01:24

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Algorithmic Bias in AI-Based Diabetes Care: Systematic Review of Model Performance, Equity Reporting, and Physiological Label Bias

2025 · 2 citations · InfoScience Trends · Open Access

Citations: 2 · Authors: 2 · Year: 2025

Abstract

Artificial intelligence (AI) and machine learning (ML) have revolutionized diabetes management through glucose prediction and decision-support systems. However, concerns persist about algorithmic bias and demographic disparities in these technologies, particularly across racial, ethnic, and socioeconomic subgroups. This systematic review evaluates the equity of AI-based glucose prediction models, focusing on performance disparities, fairness reporting, and physiological label biases. We conducted a systematic review following the PRISMA 2020 guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). Searches were performed in PubMed, Scopus, and Google Scholar using keywords related to AI, diabetes management, and health disparities. Inclusion criteria encompassed studies examining AI/ML models for glucose prediction or nutrition recommendations in diabetes, with a focus on racial, ethnic, or socioeconomic disparities. Data extraction and quality assessment were performed independently by two reviewers. Among 1,243 initially identified articles, only 10 met inclusion criteria. The review revealed limited evidence of subgroup performance disparities, with just one study explicitly evaluating racial differences in AI model performance. Fairness reporting was rare, with only 7% of AI diabetes studies documenting ethnoracial data and virtually none conducting fairness audits. Physiological and label biases, such as HbA1c discrepancies between racial groups, were documented but unaddressed in AI model development. AI-based diabetes technologies lack robust equity evaluation, with minimal reporting on subgroup performance and fairness. Without systematic bias mitigation and equitable design, these tools risk exacerbating existing health disparities. Future research must prioritize transparency, representativeness, and fairness to ensure equitable benefits for all populations.
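The fairness audits the review finds missing can be illustrated with a minimal subgroup performance check: compute a glucose-prediction model's error separately per demographic group and report the gap. The sketch below is a hypothetical example, not taken from any reviewed study; the group labels and glucose values are invented for illustration.

```python
# Hypothetical subgroup performance audit for a glucose-prediction model.
# All data below are invented: (group label, true glucose mg/dL, predicted mg/dL).
records = [
    ("A", 110, 114), ("A", 145, 140), ("A", 98, 101),
    ("B", 120, 131), ("B", 160, 148), ("B", 105, 118),
]

def rmse(pairs):
    """Root-mean-square error over (true, predicted) pairs."""
    return (sum((t - p) ** 2 for t, p in pairs) / len(pairs)) ** 0.5

# Partition predictions by demographic group.
by_group = {}
for group, true, pred in records:
    by_group.setdefault(group, []).append((true, pred))

# Per-group error and the worst-minus-best disparity gap.
per_group = {g: rmse(pairs) for g, pairs in by_group.items()}
gap = max(per_group.values()) - min(per_group.values())

for g, err in sorted(per_group.items()):
    print(f"group {g}: RMSE = {err:.1f} mg/dL")
print(f"worst-minus-best gap: {gap:.1f} mg/dL")
```

A nonzero gap alone does not establish bias, but reporting per-subgroup error alongside aggregate metrics is the kind of transparency the review argues is currently absent from AI diabetes studies.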

Topics

Artificial Intelligence in Healthcare and Education · Clinical practice guidelines implementation · Healthcare cost, quality, practices