This is an overview page with metadata for this scientific work. An external link to the full text is not currently available.
Human-Final Decision Authority in Artificial Intelligence: A Deterministic and Auditable Governance Architecture
Citations: 0
Authors: 1
Year: 2025
Abstract
Artificial intelligence systems are increasingly involved in high-impact domains such as finance, healthcare, defense, public policy, and critical infrastructure. While contemporary AI research has focused heavily on model accuracy, scalability, and autonomy, significantly less attention has been given to a more fundamental question: who holds decision authority when an AI system produces an outcome? The absence of explicit decision authority frameworks has led to growing concerns around accountability, liability, governance, and trust, particularly in regulated and safety-critical environments. This paper introduces a deterministic, governance-first framework for Decision Authority in AI systems, grounded in the principle of human-final intelligence. Rather than treating decision-making as an emergent property of autonomous models, the proposed approach formally separates inference from decision authority. AI systems are positioned strictly as decision-support mechanisms, while final authority, responsibility, and accountability remain explicitly assigned to human actors through verifiable governance constraints. The framework is built on four core pillars: (1) deterministic decision boundaries, (2) mandatory human-final authority enforcement, (3) audit-ready decision lineage and replay, and (4) uncertainty-aware hard-stop mechanisms. These elements ensure that every AI-assisted decision is reproducible, explainable, and legally attributable. Importantly, the framework is model-agnostic and can be applied across machine learning paradigms without modifying underlying algorithms, making it suitable for enterprise and regulatory adoption. By formalizing decision authority as an engineering and governance problem, rather than an ethical afterthought, this work provides a structured pathway toward accountable AI deployment. The proposed model directly addresses regulatory expectations emerging across the United States and international jurisdictions, where transparency, liability clarity, and human oversight are becoming non-negotiable requirements. The paper concludes that sustainable and trustworthy AI systems will not be defined by autonomy alone, but by their ability to operate within deterministic, auditable, and human-governed decision architectures.
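The four pillars described in the abstract can be illustrated with a short sketch. The following Python example is purely illustrative and not taken from the paper: the names (DecisionRecord, recommend, finalize) and the numeric thresholds are hypothetical assumptions. It shows how a fixed decision boundary, an uncertainty hard stop, mandatory human sign-off, and an append-only decision lineage could be wired around any model's output without modifying the underlying model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import json

# Illustrative sketch only: class names, function names, and thresholds are
# hypothetical and do not appear in the paper.

audit_log: list[str] = []    # pillar 3: append-only decision lineage

UNCERTAINTY_HARD_STOP = 0.3  # pillar 4: above this, no recommendation is issued
APPROVAL_THRESHOLD = 0.7     # pillar 1: deterministic decision boundary


@dataclass
class DecisionRecord:
    """One audit-ready lineage entry for an AI-assisted decision."""
    model_output: float                # inference result, e.g. a risk score
    model_uncertainty: float           # model-reported uncertainty in [0, 1]
    recommendation: str                # deterministic mapping of the output
    human_decision: Optional[str] = None
    human_id: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(self.__dict__, sort_keys=True)


def recommend(model_output: float, model_uncertainty: float) -> DecisionRecord:
    """Pillars 1 and 4: apply a fixed decision boundary and an uncertainty hard stop."""
    if model_uncertainty > UNCERTAINTY_HARD_STOP:
        recommendation = "ESCALATE_NO_RECOMMENDATION"
    elif model_output >= APPROVAL_THRESHOLD:
        recommendation = "RECOMMEND_APPROVE"
    else:
        recommendation = "RECOMMEND_REJECT"
    return DecisionRecord(model_output, model_uncertainty, recommendation)


def finalize(record: DecisionRecord, human_decision: str, human_id: str) -> DecisionRecord:
    """Pillars 2 and 3: the decision takes effect only when a named human actor
    signs it, and the full lineage is serialized so it can be replayed later."""
    record.human_decision = human_decision
    record.human_id = human_id
    audit_log.append(record.to_json())
    return record


if __name__ == "__main__":
    rec = recommend(model_output=0.82, model_uncertainty=0.12)
    final = finalize(rec, human_decision="APPROVE", human_id="analyst-042")
    print(final.recommendation, "->", final.human_decision,
          "| lineage entries:", len(audit_log))
```

Under these assumptions, the key design choice is that inference and authority never share a code path: recommend() can only produce advice, while finalize() is the sole place where a decision becomes binding, and only with a human identifier attached to the record.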
Similar works
The global landscape of AI ethics guidelines
2019 · 4,620 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,876 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,435 citations
Fairness through awareness
2012 · 3,293 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations