OpenAlex · Updated hourly · Last updated: 07.05.2026, 08:50

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI Proof Layer: An Outcome-Assurance Architecture for Reliable, Safe, and Auditable AI Models

2026 · 0 citations · Journal of Artificial Intelligence & Cloud Computing
Open full text at the publisher

Citations: 0

Authors: 1

Year: 2026

Abstract

Large Language Models (LLMs) deliver exceptional generative capability, but they share a structural limitation: they cannot prove that a given output is correct, grounded in authoritative evidence, or compliant with policy. As a result, hallucinations, unverifiable claims, and policy violations persist, especially in high-risk settings such as finance, healthcare, legal reasoning, and enterprise operations. This paper introduces AI Proof Layer, an external outcome-assurance layer that operates independently of the model. AI Proof Layer evaluates model outputs against explicit, measurable guarantees ("claims"), enforces ALLOW/BLOCK decisions, and generates immutable Evidence Packs suitable for audits, incident reviews, and regulatory reporting, without modifying or constraining the underlying model architecture. By separating generation (the model) from permission (AI Proof Layer), the system converts a generative AI model from a probabilistic text generator into a certifiable decision system. We present the conceptual framework, reference architecture, decision contract, example workflows, and compliance mappings that enable organizations to reduce hallucinations and establish traceable accountability across the AI lifecycle.
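The abstract's decision contract (evaluate an output against explicit claims, return ALLOW or BLOCK, and emit an immutable Evidence Pack) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual API: the claim names, check functions, and Evidence Pack fields are assumptions made for the example.

```python
import hashlib
import json
from dataclasses import dataclass
from typing import Callable

# A "claim" is an explicit, measurable guarantee the output must satisfy.
@dataclass
class Claim:
    name: str
    check: Callable[[str], bool]  # True if the output satisfies the claim

def evaluate(output: str, claims: list[Claim]) -> dict:
    """Evaluate a model output against all claims and emit an Evidence Pack."""
    results = {c.name: c.check(output) for c in claims}
    decision = "ALLOW" if all(results.values()) else "BLOCK"
    evidence = {"output": output, "claims": results, "decision": decision}
    # Hash the pack so later tampering is detectable (immutability sketch).
    evidence["digest"] = hashlib.sha256(
        json.dumps(evidence, sort_keys=True).encode()
    ).hexdigest()
    return evidence

# Hypothetical claims: the output must cite a source and contain no PII marker.
claims = [
    Claim("cites_source", lambda o: "[source:" in o),
    Claim("no_pii", lambda o: "SSN" not in o),
]

pack = evaluate("Revenue grew 4% [source: 10-K filing].", claims)
print(pack["decision"])  # ALLOW: both claims hold
```

The key design point from the abstract is that this check lives outside the model: the generator is never modified, only permitted or blocked after the fact, and the hashed pack gives auditors a tamper-evident record of why.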

Topics

Explainable Artificial Intelligence (XAI) · Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education