This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Towards a standard for identifying and managing bias in artificial intelligence
473
Citations
6
Authors
2022
Year
Abstract
As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people's lives. While many organizations seek to utilize this information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI). SP 1270 is a publication in the NIST Artificial Intelligence series and should be read in conjunction with the other publications in that series.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,502 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,855 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,376 citations
Fairness through awareness
2012 · 3,266 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations