OpenAlex · Updated hourly · Last updated: 30.03.2026, 00:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Beyond Trust: A Causal Approach to Explainable AI in Law Enforcement

2025 · 0 citations
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2025

Abstract

An AI can process a million clues in a second. But what if its most critical clue is just a hidden bias? The increasing integration of opaque Artificial Intelligence (AI) systems into high-stakes domains, such as law enforcement, presents critical challenges to accountability, fairness, and public trust. While Explainable AI (XAI) aims to mitigate this "black-box" problem, many current methods provide superficial, correlational insights that are insufficient for environments demanding verifiable understanding. This dissertation argues for a fundamental shift Beyond Trust: from accepting superficial explanations to demanding rigorous, verifiable insights; from using unevaluated methods to requiring robust validation; and from developing technical tools to ensuring they are operationalized for practitioners. This work contributes to that shift by developing and validating a suite of causality-grounded techniques and context-aware tools designed to foster genuinely understandable and accountable AI. The research is structured around three core challenges: (1) the lack of contextual awareness in XAI development, (2) the limited explanatory power and evaluation of current techniques, and (3) significant usability barriers for practitioners.

The dissertation first establishes a set of guiding desiderata for XAI in law enforcement (Chapter 2), emphasizing versatility, an appropriate focus on NLP, and a grounding in causal reasoning. The dissertation's primary technical contributions (Chapters 3-5) address the challenge of explanatory power by introducing three novel artifacts. First, CounterfactualGAN generates realistic and plausible local counterfactual explanations ("what-if" scenarios) for NLP models, validated through both functional metrics (including a novel perceptibility score) and human-grounded evaluations of naturalness. Second, Global Causal Analysis (GCA) constructs an explanatory causal graph of a model's global behavior, uniquely inferring high-level features from raw text; it is validated via a novel three-step framework including Z-fidelity metrics and expert-driven sanity checks. Third, the Causality for XAI (C4X) Framework provides a conceptual guide for the rigorous application of Causal Effect Estimation (CEE) in XAI. It contributes a structured process to ensure transparency and prevent the misapplication of causal methods, serving as a roadmap for researchers to apply rigorous, multi-faceted evaluations to their own causal explanations.

To bridge the gap between research and practice, the Explabox (Chapter 6) operationalizes these principles into an open-source, user-friendly software toolkit for NLP. It integrates XAI, fairness, robustness, and security analyses into a structured audit workflow, and its utility was validated through an application-grounded design process with the Netherlands National Police. The dissertation concludes (Chapter 7) that these contributions successfully advance the Beyond Trust paradigm, discussing the work's theoretical, practical, and societal implications, as well as its limitations and an agenda for future research.


Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education