OpenAlex · Updated hourly · Last updated: 2026-03-15, 15:19

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainability in the Wild, or Wild Explanations? Evidence From Predicting Tax Evasion

2026 · 0 citations · Harvard Data Science Review · Open Access
Open full text at publisher

Citations: 0 · Authors: 5 · Year: 2026

Abstract

As artificial intelligence algorithms become more prevalent in high-stakes risk assessment, policymakers have increasingly relied on explainability tools for interpretability. Despite growing mandates that AI-based decisions include explanations, there remains little empirical evidence demonstrating the effectiveness of these techniques in real-world applications. This gap often stems from the absence of a clear ground truth for evaluating explanations. In this work, we present an empirical evaluation of explanation techniques in collaboration with the United States Internal Revenue Service (IRS). Using real, line-by-line IRS audits from randomly selected taxpayers, we decompose one component of aggregate tax under-reporting into its constituent line-item misreporting and apply explainability techniques to recover these risks; the aggregate risk is a function of the constituent risks. Our study makes three contributions. First, we empirically evaluate how well local explanation models recover true constituent risks. Second, we compare local explanation models to estimating constituent risks directly. Finally, we situate these findings in a practical setting where explanations are critical not only for transparency but also as guidance for the users of the model's predictions. Our analysis reveals that the quality of local explanations is tied to the quality of the underlying model. Yet even with a theoretically perfect underlying model, local explanations still fail to accurately capture the true risk. While directly estimating constituent risks may yield more accurate results, simplistic rule-based heuristics often overlook the complexity of risk. These findings highlight the need for thoughtful application of explanation techniques in high-risk domains, where errors can have significant consequences.
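The failure mode the abstract describes — a local explanation missing the true constituent risks even under a perfect model — can be illustrated with a toy sketch. All numbers, names, and the occlusion-style attribution below are our own illustrative assumptions, not the paper's method or data:

```python
def constituent_risks(x1, x2):
    # Hypothetical ground-truth per-line-item misreporting risks;
    # line item 2 includes an interaction term between the inputs.
    return {
        "line_item_1": 2.0 * x1,
        "line_item_2": 0.5 * x2 + 0.3 * x1 * x2,
    }

def aggregate_risk(x1, x2):
    # Aggregate under-reporting risk as a function of the constituent
    # risks (here: their sum), mirroring the decomposition above.
    return sum(constituent_risks(x1, x2).values())

def occlusion_attributions(f, x, baseline=(0.0, 0.0)):
    # A simple occlusion-style local explanation: credit each input with
    # the drop in f when that input is replaced by a baseline value.
    names = ["line_item_1", "line_item_2"]
    attrs = {}
    for i, name in enumerate(names):
        occluded = list(x)
        occluded[i] = baseline[i]
        attrs[name] = f(*x) - f(*occluded)
    return attrs

x = (1.0, 2.0)
truth = constituent_risks(*x)                      # item 1: 2.0, item 2: ~1.6
attrs = occlusion_attributions(aggregate_risk, x)  # item 1: ~2.6, item 2: ~1.6
# Even though aggregate_risk is the exact ("theoretically perfect") model,
# the interaction term is credited to both items: the attributions sum to
# more than the aggregate, and item 1's true constituent risk is overstated.
```

The point of the sketch is only the mismatch itself: with any nonadditive risk structure, per-input attributions of the aggregate prediction need not equal the true constituent risks, which is why estimating those risks directly can be more accurate.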


Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education