OpenAlex · Updated hourly · Last updated: 16.03.2026, 01:24

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Quantifying Bias in Text Generative AI Models

2025 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

0 Citations

2 Authors

2025 Year

Abstract

Generative artificial intelligence (AI), especially large language models (LLMs), is increasingly deployed in domains such as recruitment, content creation, and education. While these systems accelerate productivity, they also risk reproducing and amplifying societal biases (Ahuchogu et al., 2025). This project addresses the urgent challenge of identifying, quantifying, and mitigating gender bias in text-generative AI outputs, with a focus on job narratives. Building on an independent study of 11,000+ job narratives generated with Gemini AI, we introduce a bias quantification framework using mean bias, mean absolute bias, sentiment skew (via TextBlob), and distributional measures (via Kullback–Leibler divergence and related distances). Preliminary results show measurable gendered patterns across generated narratives, supporting the hypothesis of gender bias in LLMs. The proposed work extends this foundation in three directions: expanding bias quantification using probabilistic distribution distances (Devisetti, 2024; Chung et al., 1989), evaluating prompt-construction bias and multi-model comparisons across GPT-3, GPT-4, Gemini, and open-source LLMs (Blodgett et al., 2020), and integrating interpretable embedding methods (e.g., SPINE; Subramanian et al., 2017) for transparency in downstream debiasing. The expected contribution is both theoretical and practical: a robust bias quantification pipeline grounded in probability theory, and actionable strategies to mitigate bias in LLM-generated recruitment texts (Ferrara, 2024). Beyond recruitment, the proposed methodology aims to serve as a standard for bias evaluation in generative AI applications more broadly. A key part of this research is the creation of large datasets containing job narratives. These datasets not only help analyze bias in AI-generated content but also support other Natural Language Processing (NLP) tasks.
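The metrics named in the abstract (mean bias, mean absolute bias, and Kullback–Leibler divergence) can be illustrated with a minimal sketch. The paper's actual pipeline is not shown here; the per-narrative scores and category distributions below are hypothetical placeholders, e.g. a score could be the sentiment difference between male- and female-framed versions of the same prompt.

```python
import math

def mean_bias(scores):
    # Signed mean: positive values indicate skew toward one group,
    # negative toward the other; opposite biases can cancel out.
    return sum(scores) / len(scores)

def mean_absolute_bias(scores):
    # Mean magnitude of bias, regardless of direction.
    return sum(abs(s) for s in scores) / len(scores)

def kl_divergence(p, q, eps=1e-12):
    # D_KL(P || Q) for discrete distributions, smoothed to avoid log(0).
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical per-narrative bias scores
scores = [0.10, -0.05, 0.20, 0.15, -0.02]
print(mean_bias(scores))           # overall direction of skew
print(mean_absolute_bias(scores))  # overall magnitude of skew

# Hypothetical word-category distributions for male- vs female-targeted narratives
p = [0.50, 0.30, 0.20]
q = [0.40, 0.35, 0.25]
print(kl_divergence(p, q))         # distributional distance between the two
```

Note that KL divergence is asymmetric (D_KL(P‖Q) ≠ D_KL(Q‖P)), which is one reason the abstract also mentions related distances.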

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Topic Modeling