OpenAlex · Updated hourly · Last updated: 19.03.2026, 07:21

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI-Assisted Sentencing Modeling Under Explainability Constraints: Framework Design and Judicial Applicability Analysis

2026 · 0 citations · Open Access
Open full text at the publisher

Citations: 0 · Authors: 2 · Year: 2026

Abstract

The integration of artificial intelligence into criminal sentencing decisions represents one of the most consequential applications of algorithmic systems in contemporary governance. While AI-assisted risk assessment tools promise enhanced consistency and predictive accuracy, their deployment in judicial contexts raises profound concerns regarding transparency, due process, and fundamental rights. This paper proposes a comprehensive framework for AI-assisted sentencing modeling that embeds explainability as a foundational constraint rather than an afterthought. Drawing upon the landmark State v. Loomis decision, empirical analyses of the COMPAS algorithm, and emerging regulatory frameworks including the European Union Artificial Intelligence Act, we examine the tension between predictive performance and interpretive transparency. Our framework integrates a three-layer explanation architecture: inherent interpretability through generalized additive models with pairwise interactions (GA2Ms) providing transparent global structure, exact local feature attribution derived directly from the additive model decomposition without approximation, and counterfactual reasoning that identifies minimal input changes altering risk classifications. We demonstrate through rigorous experimental validation on the ProPublica COMPAS dataset (n = 6172) that explainability-constrained models achieve predictive validity comparable to opaque alternatives (AUC 0.71 versus 0.70–0.72 for black-box methods) while satisfying constitutional due process requirements and emerging regulatory mandates under the EU Artificial Intelligence Act. The impossibility theorems governing algorithmic fairness are examined in light of their implications for sentencing equity, and we propose that transparent model architectures enable targeted interventions unavailable when decision logic remains concealed.
The paper concludes with policy guidance for jurisdictions seeking to implement AI-assisted sentencing systems that balance public safety objectives with procedural fairness and individual rights.
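The abstract's second and third explanation layers follow from a structural property of additive models: because the prediction is a sum of per-feature terms, each term is an exact local attribution, and counterfactuals can be found by perturbing inputs directly. The sketch below illustrates this with a toy additive risk model; the shape functions, feature names, and coefficients are hypothetical illustrations, not the model from the paper.

```python
import math

# Hypothetical per-feature shape functions f_j of a toy additive risk
# model (illustrative only -- NOT the paper's fitted GA2M).
SHAPE = {
    "prior_counts": lambda v: 0.15 * v,
    "age": lambda v: -0.04 * (v - 25.0),
    "charge_severity": lambda v: 0.30 * v,
}
INTERCEPT = -1.0

def logit(x):
    # Additive decomposition: eta(x) = beta_0 + sum_j f_j(x_j)
    return INTERCEPT + sum(f(x[k]) for k, f in SHAPE.items())

def risk(x):
    # Risk probability via the logistic link.
    return 1.0 / (1.0 + math.exp(-logit(x)))

def attributions(x):
    # Layer 2: exact local attributions. Each summand of the additive
    # decomposition IS that feature's contribution -- no post-hoc
    # approximation (unlike LIME/SHAP on a black box) is needed.
    return {k: f(x[k]) for k, f in SHAPE.items()}

def counterfactual_priors(x, threshold=0.5):
    # Layer 3 (toy version): the smallest reduction in prior_counts
    # that drops the risk score below the classification threshold.
    cf = dict(x)
    while cf["prior_counts"] > 0 and risk(cf) >= threshold:
        cf["prior_counts"] -= 1
    return cf

defendant = {"prior_counts": 4, "age": 22, "charge_severity": 1}
contrib = attributions(defendant)
# The contributions plus the intercept reproduce the model output exactly.
assert abs(INTERCEPT + sum(contrib.values()) - logit(defendant)) < 1e-12
```

The design point is that transparency here is architectural: the attribution is read off the model itself, so the explanation cannot diverge from the decision logic, which is the property the framework relies on for due-process review.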

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education