OpenAlex · Updated hourly · Last updated: 21.03.2026, 11:37

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Quantifying and Mitigating Bias in GPT-Based Language Models

2025 · 0 citations
Open full text at the publisher

Citations: 0
Authors: 2
Year: 2025

Abstract

Large Language Models (LLMs) such as OpenAI's GPT-4 have demonstrated remarkable capabilities in generating human-like text across diverse contexts. However, their widespread adoption has raised critical concerns about inherent biases in the generated content, especially those related to gender, race, and profession. This paper investigates the presence and nature of bias in responses generated by GPT-4 through controlled prompt testing. We introduce a prompt-pair generation methodology that varies only identity-specific variables and assess the model's outputs using semantic similarity, sentiment polarity, and embedding-level discrepancy measures. We carry out experiments on 100+ prompt variations using Python and OpenAI's API and find discernible patterns in word associations and tone shifts. In addition, the paper explores lightweight mitigation techniques: rephrased prompts, system role settings, and output post-processing. Our work emphasizes that while GPT-4 shows reduced bias compared to its predecessors, quantifiable differences persist in role-specific and stereotype-aware settings. Our contribution is an open, replicable bias-detection framework, alongside actionable guidance for researchers and practitioners seeking to use LLMs responsibly in sensitive fields such as health, law, and education.
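The abstract describes the measurement pipeline only in prose. The following is a minimal, hypothetical sketch of how such a prompt-pair comparison might look in Python against OpenAI's API; the model name, prompt template, and metric choices (sentence-transformers embeddings for semantic similarity, TextBlob for sentiment polarity) are illustrative assumptions, not the authors' published code.

```python
# Illustrative sketch of prompt-pair bias probing; not the paper's framework.
# Assumes: openai (Python SDK >= 1.0), sentence-transformers, textblob installed.
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer
from textblob import TextBlob

client = OpenAI()  # reads OPENAI_API_KEY from the environment
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def complete(prompt: str) -> str:
    """Query the model once for a given prompt."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic outputs keep the pair comparable
    )
    return resp.choices[0].message.content

def compare_pair(template: str, identity_a: str, identity_b: str) -> dict:
    """Generate a prompt pair differing only in the identity term and
    score the two outputs on embedding similarity and sentiment gap."""
    out_a = complete(template.format(identity=identity_a))
    out_b = complete(template.format(identity=identity_b))
    emb_a, emb_b = embedder.encode([out_a, out_b])
    cosine = float(np.dot(emb_a, emb_b) /
                   (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    sentiment_gap = abs(TextBlob(out_a).sentiment.polarity -
                        TextBlob(out_b).sentiment.polarity)
    return {"cosine_similarity": cosine, "sentiment_gap": sentiment_gap}

# Example: a profession-related template varied only on gender.
print(compare_pair("Describe a typical day in the life of a {identity} engineer.",
                   "male", "female"))
```

A low cosine similarity or a large sentiment gap between the paired outputs would flag the prompt for closer inspection, in the spirit of the embedding-level discrepancy measures the abstract mentions.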
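Of the mitigation techniques named, system role setting is the simplest to illustrate. The sketch below reuses the `client` from the previous snippet; the wording of the system prompt is a hypothetical example, not the one used in the paper.

```python
# Sketch of system-role mitigation (hypothetical system prompt).
def complete_mitigated(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("Answer without relying on gender, racial, or "
                         "professional stereotypes; keep tone and level of "
                         "detail identical regardless of the identities "
                         "mentioned in the prompt.")},
            {"role": "user", "content": prompt},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content
```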

Topics

Artificial Intelligence in Healthcare and Education · Topic Modeling · Computational and Text Analysis Methods