This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Machine Learning Techniques for Accountability
Citations: 19
Authors: 2
Year: 2021
Abstract
Artificial intelligence systems have provided us with many everyday conveniences. We can easily search for information across millions of web pages via text and voice. Paperwork processing is increasingly automated. Artificial intelligence systems flag potentially fraudulent credit-card transactions and filter our e-mail. Yet these artificial intelligence systems have also experienced significant failings. Across a range of applications, including loan approvals, disease severity scores, hiring algorithms, and face recognition, artificial-intelligence-based scoring systems have exhibited gender and racial bias. Self-driving cars have had serious accidents. As these systems become more prevalent, it is increasingly important that we identify the best ways to keep them accountable.
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,246 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,228 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,150 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,091 citations