OpenAlex · Updated hourly · Last updated: 08.05.2026, 16:00

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

DESIGN AND EVALUATION OF A LIFECYCLE-BASED FRAMEWORK FOR MITIGATING BIAS IN ORGANIZATIONAL AI SYSTEMS

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2026

About the author

Ahmed Ibrahim, D.Tech Candidate, CISSP, PMP, PMF, MSIT, SSBB, CEH Master, CCP, ECSA, Practical AI CEH
ahmedibrahim@elevationtechnology.org
https://ahmedmohamedibrahim.com/
https://www.linkedin.com/in/ahmedibrahimno1/

Ahmed Mohamed Ibrahim brings more than 20 years of experience designing and deploying AI and IT solutions across federal and civilian organizations. His core expertise spans AI governance and compliance, cybersecurity and secure platform architecture, machine learning and data architecture, and adversarial AI testing, including LLM security. This professional background drove him to this research. Having observed firsthand what happens when AI systems are deployed without adequate governance infrastructure, Ibrahim designed and evaluated the Lifecycle-Based Organizational AI Bias Mitigation Framework (LOABMF) to address the governance gap that organizations consistently face but rarely have the tools to bridge.

Abstract

Design and Evaluation of a Lifecycle-Based Framework for Mitigating Bias in Organizational AI Systems
By Ahmed Mohamed Ibrahim, D.Tech Candidate, CISSP, PMP, PMF, MSIT, SSBB, CEH Master, CCP, ECSA, Practical AI CEH
Claremont Graduate University, 2026

Organizations that deploy artificial intelligence systems for high-stakes decisions in hiring, lending, healthcare, and risk assessment face a critical and unresolved challenge: no integrated framework exists to operationalize bias mitigation across the full AI lifecycle (Barocas et al., 2023). Technical fairness research has produced debiasing tools and metrics but assumes a level of centralized technical authority that most organizations do not possess (Veale et al., 2018). Regulatory frameworks, including the European Union AI Act (Regulation 2024/1689) and U.S. anti-discrimination statutes, define compliance obligations but offer no implementation pathway calibrated to real organizational governance structures (EU AI Act, 2024).

Organizational research documents why governance fails in practice but has not produced a generalizable, tested artifact that practitioners can adopt (Rakova et al., 2021). This dissertation introduces the Lifecycle-Based Organizational AI Bias Mitigation Framework (LOABMF), a seven-stage, integrated construct that bridges the technical, regulatory, and organizational dimensions of AI bias mitigation. The framework assigns named accountability roles at each stage, embeds regulatory compliance checkpoints mapped to EU and U.S. requirements, and structures three mechanisms to address the primary barriers to effective governance: role ambiguity, siloed decision-making, and organizational short-termism. Using a design science research approach (Hevner et al., 2004), the study validates the framework through scenario-based design logic drawn from existing organizational theory and regulatory text, supplemented by expert validation interviews with practitioners from government, nonprofit, and education sectors. The research demonstrates that LOABMF's structural mechanisms effectively respond to the documented failure modes of traditional governance approaches. The framework produces stage-specific accountability artifacts (a Data Card, Model Card, Validation Report, and Governance Log) that translate abstract governance requirements into concrete practitioner deliverables. This research contributes to the field of information systems by providing a theoretically grounded yet practically applicable artifact that enables organizations to move from reactive bias management to proactive, lifecycle-based mitigation. It offers a standardized pathway for compliance with emerging regulations while addressing the sociotechnical complexities of organizational AI deployment.
Keywords: AI bias mitigation, algorithmic fairness, AI governance, organizational AI, responsible AI, EU AI Act, NIST AI RMF, design science research, lifecycle management, fairness, accountability, transparency, sociotechnical systems, disparate impact, regulatory compliance.

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education