This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Bias detection and mitigation in large language models: A fairness-driven approach
Citations: 0
Authors: 5
Year: 2026
Abstract
Large language models (LLMs) have become prevalent across numerous fields, drawing attention to the social, cultural, and demographic biases they encode. This research proposes a comprehensive, fairness-centered approach for detecting and mitigating biases in LLMs, combining intensive data augmentation, adversarial debiasing, training under fairness constraints, and adaptive post-processing methods. Experimental evaluation shows significant gains, with major reductions in Equalized Odds Difference and Statistical Parity Difference, while task relevance and linguistic competence are maintained. Correspondingly, qualitative assessment confirms greater contextual sensitivity and reduced stereotype propagation. Beyond offering concrete strategies for the ethical, accountable, and socially responsive use of language technology, these results highlight the importance of iterative bias auditing and context-sensitive mitigation throughout the model life cycle.
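The paper's implementation is not available on this page; as a minimal sketch, the following Python snippet computes the two fairness metrics named in the abstract, Statistical Parity Difference and Equalized Odds Difference, under the assumption of binary predictions and a binary protected attribute. The data and function names are illustrative, not the authors' code.

import numpy as np

def statistical_parity_difference(y_pred, group):
    # SPD = P(Y_hat = 1 | group = 0) - P(Y_hat = 1 | group = 1)
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equalized_odds_difference(y_true, y_pred, group):
    # Largest gap across groups in TPR and FPR (a common scalarization).
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def rates(g):
        tpr = y_pred[(group == g) & (y_true == 1)].mean()  # true positive rate
        fpr = y_pred[(group == g) & (y_true == 0)].mean()  # false positive rate
        return tpr, fpr
    (tpr0, fpr0), (tpr1, fpr1) = rates(0), rates(1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))

# Toy example with synthetic labels, purely for illustration.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
print("SPD:", statistical_parity_difference(y_pred, group))
print("EOD:", equalized_odds_difference(y_true, y_pred, group))

Values near zero on both metrics indicate parity between the two groups; the paper reports reductions in these quantities after mitigation, though its exact evaluation protocol is not reproduced here.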
Related works
The global landscape of AI ethics guidelines
2019 · 4,553 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,402 citations
Fairness through awareness
2012 · 3,272 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations