This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
AI-Assisted Sentencing Modeling Under Explainability Constraints: Framework Design and Judicial Applicability Analysis
Citations: 0
Authors: 2
Year: 2026
Abstract
The integration of artificial intelligence into criminal sentencing decisions represents one of the most consequential applications of algorithmic systems in contemporary governance. While AI-assisted risk assessment tools promise enhanced consistency and predictive accuracy, their deployment in judicial contexts raises profound concerns regarding transparency, due process, and fundamental rights. This paper proposes a comprehensive framework for AI-assisted sentencing modeling that embeds explainability as a foundational constraint rather than an afterthought. Drawing upon the landmark State v. Loomis decision, empirical analyses of the COMPAS algorithm, and emerging regulatory frameworks including the European Union Artificial Intelligence Act, we examine the tension between predictive performance and interpretive transparency. Our framework integrates a three-layer explanation architecture: inherent interpretability through generalized additive models (GA2Ms) providing transparent global structure, exact local feature attribution derived directly from the additive model decomposition without approximation, and counterfactual reasoning that identifies minimal input changes altering risk classifications. We demonstrate through rigorous experimental validation on the ProPublica COMPAS dataset (n = 6172) that explainability-constrained models achieve comparable predictive validity to opaque alternatives (AUC 0.71 versus 0.70–0.72 for black-box methods) while satisfying constitutional due process requirements and emerging regulatory mandates under the EU Artificial Intelligence Act. The impossibility theorems governing algorithmic fairness are examined in light of their implications for sentencing equity, and we propose that transparent model architectures enable targeted interventions unavailable when decision logic remains concealed. 
The paper concludes with policy guidance for jurisdictions seeking to implement AI-assisted sentencing systems that balance public safety objectives with procedural fairness and individual rights.
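The "exact local feature attribution derived directly from the additive model decomposition" that the abstract describes can be sketched in a few lines: in a GA2M, the risk score is a sum of per-feature shape functions plus pairwise interaction terms, so each term's value at a given input is, by construction, that feature's exact contribution. The feature names, shape functions, and coefficients below are illustrative assumptions, not the paper's fitted model.

```python
import math

# Hypothetical shape functions (assumed for illustration only).
def f_age(age):                  # risk decreases with age
    return -0.02 * (age - 25)

def f_priors(priors):            # risk grows with prior count, with diminishing returns
    return 0.15 * math.log1p(priors)

def f_age_priors(age, priors):   # one pairwise interaction term (the "2" in GA2M)
    return 0.01 * max(0, 30 - age) * min(priors, 5) / 5

INTERCEPT = 0.1

def risk_score(age, priors):
    """Additive decomposition: score = intercept + sum of term contributions.

    Each entry in `terms` IS that feature's exact local attribution --
    no post-hoc approximation (e.g. sampling-based explainers) is needed.
    """
    terms = {
        "age": f_age(age),
        "priors": f_priors(priors),
        "age x priors": f_age_priors(age, priors),
    }
    return INTERCEPT + sum(terms.values()), terms

score, attributions = risk_score(age=22, priors=3)
print(round(score, 3), {k: round(v, 3) for k, v in attributions.items()})
```

Because the attributions sum exactly to the score minus the intercept, a counterfactual query (the framework's third layer) reduces to searching for the smallest input change whose term deltas move the score across the decision threshold.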
Similar Works
The global landscape of AI ethics guidelines
2019 · 4,514 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,386 citations
Fairness through awareness
2012 · 3,269 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations