This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
AI Integrity and the PRISM Framework: Definition, Authority Stack Model, and Enhanced Cascade Mapping Hypothesis — A Conceptual Framework for Verifiable AI Decision-Making
Citations: 0
Authors: 1
Year: 2026
Abstract
This paper introduces AI Integrity as a distinct concept in AI governance, defined as a state in which the Authority Stack of an AI system (values, epistemics, sources, and data) is protected from corruption, contamination, manipulation, and bias, and is maintained in a verifiable manner. We distinguish AI Integrity from AI Ethics, AI Safety, and AI Alignment. We propose the PRISM (Profile-based Reasoning Integrity Stack Measurement) framework, comprising:

- A 4-layer Authority Stack model (L4: Normative → L3: Epistemic → L2: Source → L1: Data) with a top-down cascade structure grounded in Schwartz's value theory, Walton's argumentation schemes, and GRADE/CEBM evidence hierarchies;
- The Enhanced Cascade Mapping Hypothesis: that independent measurement of Layers 4, 3, and 2 enables derivation of Layer 1 and prediction of model responses;
- A unified benchmark suite of 328,860 scenarios per model across 7 professional domains, 15 severity levels, and domain-specific temporal horizons;
- Three core metrics: CCI (Cascade Consistency Index), ASPA (Authority Stack Predictive Accuracy), and PCS (Perspective Consistency Score).

This is the conceptual companion to the empirical paper (DOI: 10.5281/zenodo.18859945), which reports 113,400 forced-choice value judgment responses across 10 AI models. The complete dataset is available at DOI: 10.5281/zenodo.18772961.
Similar Works
The global landscape of AI ethics guidelines
2019 · 4,504 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,856 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,378 citations
Fairness through awareness
2012 · 3,267 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations