This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Augmented Science Journalist: A Human-in-the-Loop Framework for AI Integration
Citations: 0
Authors: 3
Year: 2026
Abstract
Algorithmic bias in healthcare systems has emerged as a critical threat to equitable patient care, with growing evidence that machine learning models perpetuate racial and ethnic disparities in clinical decision-making. This study aimed to investigate the extent, evolution, and real-world consequences of bias in healthcare algorithms through an innovative human-in-the-loop (HITL) investigative journalism framework. The methodology integrated AI-driven discovery, automated code repository auditing, and in-depth human investigation across three phases. AI tools analyzed temporal bias trends from 2015–2023, audited over 50 public GitHub repositories, and quantified disparities, while human journalists conducted expert interviews, impact assessments, and narrative synthesis to ensure contextual accuracy and ethical framing. Key findings revealed persistent and severe biases: Black and Native American patients experienced 2–3 times higher bias scores than White patients, with diagnostic and risk-prediction algorithms showing the greatest disparities. Only 33% of analyzed repositories included explicit bias testing, despite high adoption rates. Consequential impacts included false negative rates up to 73.7% for Black patients needing care, elevated treatment disparities, poorer health outcomes, and substantial economic costs from excess hospitalizations. The novelty lies in the scalable HITL synergy that enabled longitudinal, multi-source analysis previously infeasible manually, translating technical artifacts into actionable public knowledge. In conclusion, unchecked algorithmic bias systematically harms marginalized communities. We recommend mandatory bias audits, regulatory oversight of proprietary systems, and participatory governance involving affected patients.
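The abstract reports disparities in false negative rates across patient groups. A minimal sketch of how such a group-wise disparity metric can be computed is shown below; the data, group labels, and the `fnr_by_group` helper are illustrative assumptions, not the study's actual dataset or bias-score definition.

```python
def false_negative_rate(y_true, y_pred):
    """FNR = FN / (FN + TP): the share of truly positive cases the model misses."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return fn / (fn + tp) if (fn + tp) else 0.0

def fnr_by_group(records):
    """records: list of (group, y_true, y_pred) tuples; returns FNR per group."""
    groups = {}
    for g, t, p in records:
        groups.setdefault(g, ([], []))
        groups[g][0].append(t)
        groups[g][1].append(p)
    return {g: false_negative_rate(t, p) for g, (t, p) in groups.items()}

# Toy data (hypothetical): the model misses positives more often in group B.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
rates = fnr_by_group(records)
disparity = rates["B"] / rates["A"]  # ratio > 1: group B's positives are missed more often
```

A ratio of this kind (here, group B is missed twice as often as group A) is one simple way to express the "2–3 times higher" group disparities the abstract describes.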
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations