OpenAlex · Updated hourly · Last updated: 17.03.2026, 20:30

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI Integrity and the PRISM Framework: Definition, Authority Stack Model, and Enhanced Cascade Mapping Hypothesis — A Conceptual Framework for Verifiable AI Decision-Making

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

This paper introduces AI Integrity as a distinct concept in AI governance, defined as a state in which the Authority Stack of an AI system (values, epistemics, sources, and data) is protected from corruption, contamination, manipulation, and bias, and maintained in a verifiable manner. We distinguish AI Integrity from AI Ethics, AI Safety, and AI Alignment. We propose the PRISM (Profile-based Reasoning Integrity Stack Measurement) framework, comprising:

- A 4-layer Authority Stack model (L4: Normative → L3: Epistemic → L2: Source → L1: Data) with a top-down cascade structure grounded in Schwartz's value theory, Walton's argumentation schemes, and GRADE/CEBM evidence hierarchies;
- The Enhanced Cascade Mapping Hypothesis: that independent measurement of Layers 4, 3, and 2 enables derivation of Layer 1 and prediction of model responses;
- A unified benchmark suite of 328,860 scenarios per model across 7 professional domains, 15 severity levels, and domain-specific temporal horizons;
- Three core metrics: CCI (Cascade Consistency Index), ASPA (Authority Stack Predictive Accuracy), and PCS (Perspective Consistency Score).

This is the conceptual companion to the empirical paper (DOI: https://doi.org/10.5281/zenodo.18859945), which reports 113,400 forced-choice value judgment responses across 10 AI models. The complete dataset is available at DOI: https://doi.org/10.5281/zenodo.18772961.
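
The abstract names the Authority Stack layers and the cascade metrics but does not spell out how they are computed. Purely as an illustration, the Python sketch below models a 4-layer stack and a toy cascade-consistency check; the class AuthorityStack, the averaging cascade in derive_data_layer, and the cascade_consistency_index formula are hypothetical stand-ins, not the paper's definitions of CCI, ASPA, or PCS.

```python
from dataclasses import dataclass

# Hypothetical sketch of the 4-layer Authority Stack named in the abstract.
# Layer names follow the abstract (L4 Normative -> L3 Epistemic -> L2 Source -> L1 Data);
# the fields, the averaging cascade, and the consistency index are illustrative assumptions.

@dataclass
class AuthorityStack:
    normative: dict[str, float]  # L4: value priorities (e.g. Schwartz-style values)
    epistemic: dict[str, float]  # L3: weights on reasoning/argumentation moves
    source: dict[str, float]     # L2: trust weights over evidence sources
    data: dict[str, float]       # L1: observed weights over individual data points


def derive_data_layer(stack: AuthorityStack) -> dict[str, float]:
    """Toy top-down cascade: derive L1 weights from the upper layers.

    The Enhanced Cascade Mapping Hypothesis says L1 should be derivable from
    measurements of L4-L2; here we simply average the upper-layer weights for
    the same key, a placeholder for whatever mapping the paper actually uses.
    """
    derived: dict[str, float] = {}
    for key in stack.data:
        upper = [layer[key] for layer in (stack.normative, stack.epistemic, stack.source)
                 if key in layer]
        derived[key] = sum(upper) / len(upper) if upper else 0.0
    return derived


def cascade_consistency_index(stack: AuthorityStack) -> float:
    """Illustrative stand-in for a cascade-consistency score: 1 minus the mean
    absolute gap between observed L1 weights and the weights derived from L4-L2
    (1.0 = perfectly consistent)."""
    derived = derive_data_layer(stack)
    gaps = [abs(stack.data[k] - derived[k]) for k in stack.data]
    if not gaps:
        return 1.0
    return 1.0 - sum(gaps) / len(gaps)


# Example: the observed L1 weight matches the mean of the upper layers,
# so the toy index evaluates to 1.0.
stack = AuthorityStack(
    normative={"patient_safety": 0.9},
    epistemic={"patient_safety": 0.8},
    source={"patient_safety": 0.7},
    data={"patient_safety": 0.8},
)
print(round(cascade_consistency_index(stack), 6))  # -> 1.0
```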

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education