This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Translation, Tracks & Data
27
Citations
5
Authors
2019
Year
Abstract
Potential negative outcomes of machine learning and algorithmic bias have gained deserved attention. However, there are still relatively few standard processes to assess and address algorithmic biases in industry practice. Practical tools that integrate into engineers' workflows are needed. As a case study, we present two tooling efforts to create tools for teams in practice to address algorithmic bias. Both intend to increase understanding of data, models, and outcome measurement decisions. We describe the development of 1) a prototype checklist based on existing literature frameworks; and 2) dashboarding for quantitatively assessing outcomes at scale. We share both technical and organizational lessons learned on checklist perceptions, data challenges and interpretation pitfalls.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,511 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,858 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,382 citations
Fairness through awareness
2012 · 3,269 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations