This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
When Your Only Tool Is A Hammer
Citations: 12
Authors: 4
Year: 2020
Abstract
It is no longer a hypothetical worry that artificial intelligence - more specifically, machine learning (ML) - can propagate the effects of pernicious bias in healthcare. To address these problems, some have proposed the development of 'algorithmic fairness' solutions. The primary goal of these solutions is to constrain the effect of pernicious bias on a given outcome of interest as a function of one's protected identity (i.e., characteristics generally protected by civil or human rights legislation). The technical limitations of these solutions have been well characterized. Ethically, the problematic implication - for developers and, potentially, end users - is that algorithmic fairness solutions can render a model 'objective' (i.e., free from the influence of pernicious bias). The ostensible neutrality of these solutions may unintentionally create new consequences for vulnerable groups by obscuring downstream problems that arise from the persistence of real-world bias.
Similar works
The Cochrane Collaboration's tool for assessing risk of bias in randomised trials
2011 · 33,435 citations
Global burden of 369 diseases and injuries in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019
2020 · 18,328 citations
To Err Is Human
2000 · 14,066 citations
Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies
2007 · 9,404 citations
KDIGO 2024 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease
2024 · 6,656 citations