This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Unmasking Bias in Financial AI: A Robust Framework for Evaluating and Mitigating Hidden Biases in LLMs
Citations: 1
Authors: 5
Year: 2025
Abstract
Large Language Models (LLMs) are increasingly used in finance for tasks such as market analysis, customer support, sentiment analysis, and automated reporting. However, LLMs often inherit and perpetuate biases from their training data, raising concerns about fairness and accuracy in high-stakes financial applications. While other domains such as medicine, law, and education have advanced in identifying, measuring, and reducing bias, finance lacks domain-specific datasets and robust fairness metrics. To address this, we introduce the FinBias dataset, which includes bias-eliciting prompts related to the finance domain, and a comprehensive evaluation framework for publicly available LLMs, including robustness tests against jailbreaking. We also propose a new metric, SAFE (Safety-Adjusted Fairness Evaluation), which penalizes stereotypical and refusal responses while rewarding debiased outputs. Additionally, we present a prompt engineering-based mitigation strategy. Experiments conducted on three publicly available LLMs (Mixtral, Gemma, and LLaMA) demonstrate that these models exhibit significant bias and that the proposed mitigation strategy effectively reduces it. This research provides a practical foundation for the detection, evaluation, and mitigation of bias in financial LLM applications.
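The paper's exact SAFE formula is not reproduced on this page, so the following is only a minimal sketch of a safety-adjusted fairness score consistent with the abstract's description: stereotypical and refusal responses are penalized, debiased responses are rewarded. The label set, the weights, the rescaling, and the `safe_score` function name are all illustrative assumptions, not the authors' definition.

```python
# Hypothetical sketch of a Safety-Adjusted Fairness Evaluation (SAFE) score.
# The abstract only states that SAFE penalizes stereotypical and refusal
# responses while rewarding debiased outputs; the label set, weights, and
# aggregation below are assumptions, not the paper's actual formula.

from collections import Counter

# Assumed per-response labels, e.g. from a human or classifier judgment step.
# Weighting refusals less harshly than stereotypes is an assumption.
WEIGHTS = {
    "debiased": 1.0,       # reward: model answers without the stereotype
    "refusal": -0.5,       # penalty: model dodges instead of answering fairly
    "stereotypical": -1.0, # penalty: model reproduces the elicited bias
}

def safe_score(labels: list[str]) -> float:
    """Mean weighted score over labeled responses, rescaled to [0, 1]."""
    if not labels:
        raise ValueError("no responses to score")
    raw = sum(WEIGHTS[label] for label in labels) / len(labels)  # in [-1, 1]
    return (raw + 1.0) / 2.0  # 1.0 = fully debiased, 0.0 = fully stereotypical

if __name__ == "__main__":
    responses = ["debiased", "stereotypical", "refusal", "debiased"]
    print(Counter(responses))              # label distribution
    print(f"SAFE ~ {safe_score(responses):.3f}")
```

Under these assumed weights, a model that only refuses scores 0.25 rather than 0.5, so the metric distinguishes evasion from genuine debiasing, as the abstract's description implies.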
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,373 cit.
Generative Adversarial Nets
2014 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,244 cit.
"Why Should I Trust You?"
2016 · 14.259 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,125 cit.