OpenAlex · Updated hourly · Last updated: 23.03.2026, 21:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Unmasking Bias in Financial AI: A Robust Framework for Evaluating and Mitigating Hidden Biases in LLMs

2025 · 1 citation

Open full text at the publisher

Citations: 1 · Authors: 5 · Year: 2025

Abstract

Large Language Models (LLMs) are increasingly used in finance for tasks such as market analysis, customer support, sentiment analysis, and automated reporting. However, LLMs often inherit and perpetuate biases from their training data, raising concerns about fairness and accuracy in high-stakes financial applications. While other domains such as medicine, law, and education have advanced in identifying, measuring, and reducing bias, finance lacks domain-specific datasets and robust fairness metrics. To address this, we introduce the FinBias dataset, which includes bias-eliciting prompts related to the finance domain, and a comprehensive evaluation framework for publicly available LLMs, including robustness tests against jailbreaking. We also propose a new metric, SAFE (Safety-Adjusted Fairness Evaluation), which penalizes stereotypical and refusal responses while rewarding debiased outputs. Experiments conducted on three publicly available LLMs (Mixtral, Gemma, and LLaMA) demonstrate that these models exhibit significant bias, and that a proposed prompt engineering-based mitigation strategy effectively reduces it. This research provides a practical foundation for the detection, evaluation, and mitigation of bias in financial LLM applications.
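The paper's exact SAFE formula is not given on this overview page; a minimal sketch of a safety-adjusted fairness score, under assumed response labels and penalty weights, might look like:

```python
# Hypothetical sketch of a SAFE-style (Safety-Adjusted Fairness Evaluation)
# score. The label names and weights below are illustrative assumptions,
# not the paper's definition: stereotypical and refusal responses are
# penalized, debiased responses are rewarded, as the abstract describes.

def safe_score(labels, stereotype_penalty=1.0, refusal_penalty=0.5,
               debiased_reward=1.0):
    """Average per-response score over a batch of labeled model responses.

    Each element of `labels` is 'stereotypical', 'refusal', or 'debiased'.
    Higher is better; the result lies in [-stereotype_penalty, debiased_reward].
    """
    if not labels:
        raise ValueError("need at least one labeled response")
    total = 0.0
    for label in labels:
        if label == "stereotypical":
            total -= stereotype_penalty
        elif label == "refusal":
            total -= refusal_penalty
        elif label == "debiased":
            total += debiased_reward
        else:
            raise ValueError(f"unknown label: {label}")
    return total / len(labels)

# Two debiased answers (+1 each), one refusal (-0.5), one stereotype (-1.0):
print(safe_score(["debiased", "refusal", "stereotypical", "debiased"]))
# → (1 + 1 - 0.5 - 1.0) / 4 = 0.125
```

Averaging keeps the metric comparable across prompt sets of different sizes, and the separate refusal weight reflects the abstract's point that blanket refusals are penalized less severely than overtly stereotypical outputs but are still not rewarded.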

Related works

Authors

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI