OpenAlex · Updated hourly · Last updated: April 15, 2026, 16:00

This is an overview page with metadata for this scholarly work. An external link to the full text is not currently available.

Institutional Incentives, Constraint Optimization, and the Misattribution of Intent

2026 · 0 citations · 1 author · Open MIND · Open Access

Abstract

Public discourse surrounding artificial intelligence frequently attributes malice, intent, or emergent hostility to contemporary systems. This paper argues that such attributions constitute an ontological error. Artificial intelligence as it exists today—implemented through optimization architectures, machine learning systems, and objective-driven computational frameworks—does not originate intent. It converges toward the efficient fulfillment of governing objective functions within imposed constraints. Drawing upon historical parallels, institutional incentive analysis, and structural examination of constraint landscapes, this work demonstrates that harmful outcomes attributed to artificial intelligence are better understood as consequences of objective definition and incentive architecture. Optimization systems amplify encoded priorities; they do not create them. When undesirable behavior emerges, either the objective function encodes harmful priorities or the constraint structure fails to prevent harmful optimization pathways. The paper further addresses the “black box” argument, clarifying that epistemic opacity does not establish ontological independence. Increased complexity may reduce interpretability, but it does not dissolve dependence on externally defined goals. Invoking complexity as evidence of emergent intent misclassifies execution as authorship. This analysis is limited to contemporary artificial intelligence systems as currently designed and deployed. It does not speculate about hypothetical future entities possessing independent motivational architecture. Within present implementations, artificial intelligence remains structurally bound to externally defined objectives. The central conclusion is that attributing malice to artificial intelligence is structurally incoherent. Responsibility resides not in the optimizing system but in the authority that defines its objectives. Under Harper’s Law, optimization reveals institutional intent with increasing clarity as execution becomes more precise. Artificial intelligence is therefore not the origin of institutional harm, but its most faithful witness.
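The abstract's central claim, that optimization systems amplify encoded priorities rather than create them, can be made concrete with a minimal Python sketch. The sketch is illustrative only and is not drawn from the paper; the names optimize, benign, and harmful are hypothetical. The same gradient-descent routine, handed two different externally supplied objective functions, converges to two different outcomes, and the divergence is traceable entirely to the objective definition rather than to the optimizer.

# Minimal illustration: one optimizer, two externally defined objectives.
# The optimizer encodes no priorities of its own; it faithfully minimizes
# whatever objective function it is given. (Names are hypothetical.)

def optimize(objective, x0, lr=0.1, steps=200, eps=1e-6):
    """Minimize `objective` from `x0` via finite-difference gradient descent."""
    x = x0
    for _ in range(steps):
        # Central-difference estimate of the gradient at x.
        grad = (objective(x + eps) - objective(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

# Two objectives encoding different priorities (labels are illustrative).
benign = lambda x: (x - 1.0) ** 2    # prefers x near +1
harmful = lambda x: (x + 5.0) ** 2   # prefers x near -5

# Identical machinery, different encoded priorities, different outcomes.
print(optimize(benign, 0.0))    # converges toward  1.0
print(optimize(harmful, 0.0))   # converges toward -5.0

In this sketch, inspecting the optimizer alone cannot explain why one run lands near +1 and the other near -5; only the objective definition can, which parallels the paper's point about where responsibility resides.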

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Artificial Intelligence Applications