OpenAlex · Updated hourly · Last updated: 18.03.2026, 01:05

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Augmented Science Journalist: A Human-in-the-Loop Framework for AI Integration

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

Citations: 0 · Authors: 3 · Year: 2026

Abstract

Algorithmic bias in healthcare systems has emerged as a critical threat to equitable patient care, with growing evidence that machine learning models perpetuate racial and ethnic disparities in clinical decision-making. This study aimed to investigate the extent, evolution, and real-world consequences of bias in healthcare algorithms through an innovative human-in-the-loop (HITL) investigative journalism framework. The methodology integrated AI-driven discovery, automated code repository auditing, and in-depth human investigation across three phases. AI tools analyzed temporal bias trends from 2015–2023, audited over 50 public GitHub repositories, and quantified disparities, while human journalists conducted expert interviews, impact assessments, and narrative synthesis to ensure contextual accuracy and ethical framing. Key findings revealed persistent and severe biases: Black and Native American patients experienced 2–3 times higher bias scores than White patients, with diagnostic and risk-prediction algorithms showing the greatest disparities. Only 33% of analyzed repositories included explicit bias testing, despite high adoption rates. Consequential impacts included false negative rates up to 73.7% for Black patients needing care, elevated treatment disparities, poorer health outcomes, and substantial economic costs from excess hospitalizations. The novelty lies in the scalable HITL synergy that enabled longitudinal, multi-source analysis previously infeasible manually, translating technical artifacts into actionable public knowledge. In conclusion, unchecked algorithmic bias systematically harms marginalized communities. We recommend mandatory bias audits, regulatory oversight of proprietary systems, and participatory governance involving affected patients.
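The abstract reports group-level disparities such as false negative rates up to 73.7% for Black patients needing care. As a minimal illustrative sketch only (not the study's actual pipeline; the group labels and toy records are invented), a per-group false negative rate of the kind quantified here can be computed as FN / (FN + TP) over patients who truly needed care:

```python
from collections import defaultdict

def false_negative_rates(records):
    """Per-group false negative rate: FN / (FN + TP).

    Each record is (group, needs_care, flagged_by_model);
    only patients who truly need care enter the denominator.
    """
    fn = defaultdict(int)  # needed care, model did not flag
    tp = defaultdict(int)  # needed care, model flagged
    for group, needs_care, flagged in records:
        if needs_care:
            if flagged:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: fn[g] / (fn[g] + tp[g]) for g in set(fn) | set(tp)}

# Toy data: (group, actually needs care, algorithm flagged for care)
toy = [
    ("A", True, True), ("A", True, False),
    ("B", True, False), ("B", True, False), ("B", True, True),
]
print(false_negative_rates(toy))  # group B misses care twice as often
```

Comparing these rates across demographic groups, as the study does, directly exposes the kind of disparity the abstract describes.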


Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Electronic Health Records Systems