OpenAlex · Updated hourly · Last updated: 04.04.2026, 06:45

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Bias and Fairness in AI Models: How can Bias in AI Models be Identified, Mitigated, and Prevented in Data Science Practices?

2024 · 1 citation · International Journal of Innovative Science and Research Technology (IJISRT) · Open Access
Open full text at publisher

Citations: 1 · Authors: 4 · Year: 2024

Abstract

Artificial intelligence (AI) and machine learning (ML) systems are increasingly used across many domains, making critical decisions that affect people's lives. However, these systems can perpetuate and even amplify existing social biases, leading to unfair outcomes. This paper examines the sources of bias in AI models, evaluates current techniques for identifying and mitigating bias, and proposes a comprehensive framework for developing fairer AI systems. By integrating technical, ethical, and operational perspectives, this research aims to contribute to a more equitable use of AI across different sectors, ensuring that AI-driven decisions are fair, transparent, and socially responsible.
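As one concrete illustration of the bias-identification step the abstract alludes to, a common starting point in data science practice is to measure demographic parity: the difference in positive-prediction rates between two groups. The sketch below uses toy data and a hypothetical helper name; it is not taken from the paper itself.

```python
def demographic_parity_difference(y_pred, groups):
    """Difference in positive-prediction rates between group 1 and group 0.

    y_pred: iterable of 0/1 model predictions.
    groups: iterable of 0/1 group membership labels, aligned with y_pred.
    """
    rates = {}
    for g in (0, 1):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)  # positive-prediction rate per group
    return rates[1] - rates[0]

# Toy example: group 1 receives positive predictions far more often.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
```

A value near 0 indicates parity; a large absolute value flags a disparity that mitigation techniques (reweighing, threshold adjustment, constrained optimization) would then address. Libraries such as Fairlearn and AIF360 provide production-grade versions of this and related metrics.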



Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education