This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Bias and Fairness in AI Models: How can Bias in AI Models be Identified, Mitigated, and Prevented in Data Science Practices?
Citations: 1 · Authors: 4 · Year: 2024
Abstract
Artificial intelligence (AI) and machine learning (ML) systems are increasingly deployed across diverse domains, making critical decisions that affect people's lives. However, these systems can perpetuate and even amplify existing social biases, leading to unfair outcomes. This paper examines the sources of bias in AI models, evaluates current techniques for identifying and mitigating bias, and proposes a comprehensive framework for developing fairer AI systems. By integrating technical, ethical, and practical perspectives, this research aims to contribute to a more equitable use of AI across different domains, ensuring that AI-driven decisions are fair, transparent, and socially responsible.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,603 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,870 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,424 citations
Fairness through awareness
2012 · 3,282 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations