This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
AI Proof Layer: An Outcome-Assurance Architecture for Reliable, Safe, and Auditable AI Models
Citations: 0 · Authors: 1 · Year: 2026
Abstract
Large Language Models (LLMs) deliver exceptional generative capability, but they share a structural limitation: they cannot prove that a given output is correct, grounded in authoritative evidence, or compliant with policy. As a result, hallucinations, unverifiable claims, and policy violations persist, especially in high-risk settings such as finance, healthcare, legal reasoning, and enterprise operations. This paper introduces AI Proof Layer, an external outcome-assurance layer that operates independently of the model. AI Proof Layer evaluates model outputs against explicit, measurable guarantees (“claims”), enforces ALLOW/BLOCK decisions, and generates immutable Evidence Packs suitable for audits, incident reviews, and regulatory reporting, without modifying or constraining the underlying model architecture. By separating generation (the model) from permission (AI Proof Layer), the system converts a generative AI model from a probabilistic text generator into a certifiable decision system. We present the conceptual framework, reference architecture, decision contract, example workflows, and compliance mappings that enable organizations to reduce hallucinations and establish traceable accountability across the AI lifecycle.
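The abstract describes a decision contract in which each model output is checked against explicit claims, an ALLOW/BLOCK decision is enforced, and an immutable evidence record is produced for audit. The following minimal Python sketch illustrates one possible shape of such a contract; the names (Claim, EvidencePack, evaluate) and the toy claims are illustrative assumptions, not the API defined in the paper.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

# Illustrative sketch only: structure and names are assumptions,
# not the paper's actual decision-contract interface.

@dataclass
class Claim:
    """An explicit, measurable guarantee an output must satisfy."""
    name: str
    check: Callable[[str], bool]  # returns True if the output satisfies the claim

@dataclass(frozen=True)
class EvidencePack:
    """Immutable record of a decision, suitable for audit and incident review."""
    output: str
    results: tuple          # (claim name, passed) pairs
    decision: str           # "ALLOW" or "BLOCK"
    timestamp: str

def evaluate(output: str, claims: list) -> EvidencePack:
    """Check an output against all claims and return an ALLOW/BLOCK decision."""
    results = tuple((c.name, c.check(output)) for c in claims)
    decision = "ALLOW" if all(passed for _, passed in results) else "BLOCK"
    return EvidencePack(
        output=output,
        results=results,
        decision=decision,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Example usage with two toy claims (hypothetical policy checks).
claims = [
    Claim("non_empty", lambda s: bool(s.strip())),
    Claim("no_unsupported_guarantee", lambda s: "guaranteed" not in s.lower()),
]
pack = evaluate("Refunds are guaranteed within 24 hours.", claims)
print(pack.decision)   # BLOCK
print(pack.results)    # (('non_empty', True), ('no_unsupported_guarantee', False))

In this sketch, generation and permission stay separate: the model produces text, and the layer alone decides whether that text may be released, recording why.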
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,886 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,349 citations
"Why Should I Trust You?"
2016 · 14,661 citations
Generative adversarial networks
2020 · 13,286 citations