OpenAlex · Updated hourly · Last updated: 2026-03-23, 14:58

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Human Feasibility Constrained AI: A Conceptual Framework for Next-Generation Human-AI Systems

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

0 citations · 1 author · Year: 2026

Abstract

This paper introduces human feasibility constrained AI as a way to explain why increasingly powerful machine learning systems still fail when deployed in real-world social and institutional contexts. Rather than attributing deployment failures solely to model performance or data limitations, it argues that some constraints in human–AI systems are fundamentally non-learnable: they arise from responsibility, legitimacy, ethics, and institutional governance, and therefore cannot be optimized away by scaling models or improving accuracy. The paper distinguishes between learnable functions (e.g., prediction, pattern recognition, optimization under fixed objectives) and non-learnable human responsibilities (e.g., defining boundaries, interpreting outputs in context, assuming accountability, and governing failures). It then sketches a simple formal decomposition in which human feasibility constraints restrict the feasible policy space and structure escalation, override, and ex post review, independently of what can be learned from data. Building on case studies from education, criminal justice, and AI-enabled clinical decision support, the paper proposes a four-level model of human–AI collaboration: Level 0 (tool usage), Level 1 (verification), Level 2 (boundary definition and task allocation), and Level 3 (accountability and failure governance). It argues that higher levels remain structurally human even as AI capabilities grow, and that system design, evaluation, and governance should be organized around these non-delegable responsibilities rather than treating human involvement as residual “friction.” The framework is intended as a conceptual and theoretical foundation for future work on AI system design, human–AI collaboration, and AI governance in domains such as health care, public services, and infrastructure management.
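The "simple formal decomposition" mentioned in the abstract is not reproduced on this page. The following is a minimal sketch of the kind of constrained formulation the abstract describes, where all notation (the policy space \(\Pi\), the constraint set \(C_H\), the learnable objective \(J\)) is an assumption for illustration, not the paper's own:

```latex
% Hypothetical notation: \Pi is the set of policies the system could learn;
% C_H collects non-learnable human feasibility constraints
% (responsibility, legitimacy, ethics, institutional governance).
\Pi_{\text{feasible}} \;=\; \{\, \pi \in \Pi \;\mid\; c(\pi) \text{ holds for all } c \in C_H \,\},
\qquad
\pi^{*} \;=\; \arg\max_{\pi \,\in\, \Pi_{\text{feasible}}} J(\pi).
```

The point the abstract makes, in this sketch, is that \(C_H\) constrains the feasible set independently of \(J\): improving predictive accuracy enlarges or refines what is learnable but does not relax the human constraints, so escalation, override, and ex post review remain structurally outside the optimization.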


Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)