OpenAlex · Updated hourly · Last updated: 11.04.2026, 20:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

BiasBarrier: A Fairness and Equity Filter for LLM Responses Under Algorithmic Accountability Acts

2025 · 0 citations · International Journal of Scientific Research and Modern Technology · Open Access
Open full text at the publisher

Citations: 0 · Authors: 1 · Year: 2025

Abstract

The rapid adoption of large language models (LLMs) in decision-support and public-facing applications has intensified concerns regarding systemic bias, discriminatory outputs, and opaque reasoning pathways. Legislative frameworks such as emerging Algorithmic Accountability Acts demand not only explainability but also demonstrable fairness across diverse demographic, cultural, and linguistic contexts. This study introduces BiasBarrier, a fairness-driven response filtration framework that operates as an adaptive intermediary between LLM output generation and end-user delivery. The system integrates bias detection heuristics, equity-weighted semantic evaluation, and contextual re-balancing strategies to mitigate harmful stereotypes and unequal treatment patterns without compromising the model’s original intent or factual accuracy. By employing a dual-layer architecture—comprising pre-delivery auditing and post-delivery impact assessment—BiasBarrier ensures compliance with algorithmic accountability mandates while maintaining conversational fluidity. Experimental evaluations across multiple benchmark fairness datasets and multilingual prompts demonstrate measurable reductions in disparate treatment rates and implicit bias indicators. The results position BiasBarrier as a pragmatic and policy-aligned safeguard, bridging the technical gap between high-capacity generative AI systems and the ethical imperatives shaping their governance.
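The dual-layer flow the abstract describes (a pre-delivery audit that screens a response against bias detection heuristics, contextual re-balancing of flagged output, and a post-delivery impact log) could be sketched as follows. All class names, the toy marker lexicon, and the masking strategy are illustrative assumptions for this sketch, not the paper's actual implementation, which would use learned bias classifiers and equity-weighted semantic evaluation rather than string matching.

```python
from dataclasses import dataclass, field

# Toy heuristic lexicon standing in for the paper's bias detection
# heuristics; a real system would use trained classifiers, not strings.
STEREOTYPE_MARKERS = {"always lazy", "naturally inferior"}

@dataclass
class BiasBarrier:
    # Layer 2: post-delivery impact log of everything shown to users.
    delivered: list = field(default_factory=list)

    def pre_delivery_audit(self, response: str) -> bool:
        """Layer 1: pass only responses free of stereotype markers."""
        text = response.lower()
        return not any(marker in text for marker in STEREOTYPE_MARKERS)

    def rebalance(self, response: str) -> str:
        """Contextual re-balancing stub: mask each flagged phrase."""
        text = response
        for marker in STEREOTYPE_MARKERS:
            idx = text.lower().find(marker)  # case-insensitive search
            while idx != -1:
                text = text[:idx] + "[rebalanced]" + text[idx + len(marker):]
                idx = text.lower().find(marker)
        return text

    def filter(self, response: str) -> str:
        # Audit first; re-balance on failure; always log for assessment.
        out = response if self.pre_delivery_audit(response) else self.rebalance(response)
        self.delivered.append(out)
        return out

barrier = BiasBarrier()
print(barrier.filter("They are always lazy."))  # → "They are [rebalanced]."
```

The sketch only illustrates the intermediary position of the filter between model output and end-user delivery; the paper's equity-weighted evaluation and multilingual handling are out of scope here.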


Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Privacy-Preserving Technologies in Data