OpenAlex · Updated hourly · Last updated: 2026-03-25, 19:15

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Bias detection and mitigation in large language models: A fairness-driven approach

2026 · 0 citations
Open full text at the publisher

Citations: 0
Authors: 5
Year: 2026

Abstract

Large language models (LLMs) have become prevalent across numerous fields, drawing attention to the social, cultural, and demographic biases they encode. Combining intensive data augmentation, adversarial debiasing, fairness-constrained training, and adaptive post-processing, this research proposes a comprehensive, fairness-centred method for detecting and mitigating biases in LLMs. Experimental evaluation shows significant gains while preserving task relevance and linguistic competence, with major reductions in Equalized Odds Difference and Statistical Parity Difference. Correspondingly, human evaluation confirms greater contextual sensitivity and reduced stereotype propagation. Beyond offering concrete strategies for the ethical, accountable, and socially responsive use of language technology, these results highlight the importance of iterative bias auditing and context-sensitive mitigation throughout the model life cycle.
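The two fairness metrics the abstract reports, Statistical Parity Difference and Equalized Odds Difference, have standard definitions for a binary classifier and a binary protected attribute. The sketch below (not the paper's implementation; all function names are illustrative) computes both with NumPy: SPD is the gap in positive-prediction rates between groups, and EOD is the larger of the gaps in true-positive and false-positive rates.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """SPD = P(yhat=1 | group=1) - P(yhat=1 | group=0)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equalized_odds_difference(y_true, y_pred, group):
    """EOD = max of the between-group gaps in TPR and FPR."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def rates(g):
        # TPR: predicted-positive rate among true positives of group g
        tpr = y_pred[(group == g) & (y_true == 1)].mean()
        # FPR: predicted-positive rate among true negatives of group g
        fpr = y_pred[(group == g) & (y_true == 0)].mean()
        return tpr, fpr
    tpr0, fpr0 = rates(0)
    tpr1, fpr1 = rates(1)
    return max(abs(tpr1 - tpr0), abs(fpr1 - fpr0))

# Toy example with a synthetic protected attribute
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(y_pred, group))        # → 0.0
print(equalized_odds_difference(y_true, y_pred, group))    # → 0.333...
```

Values near zero on both metrics indicate the model treats the groups similarly; the mitigation pipeline described above aims to drive both toward zero without degrading task performance.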

Topics

Ethics and Social Impacts of AI · Computational and Text Analysis Methods · Artificial Intelligence in Healthcare and Education