This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Interpretable Machine Learning Meets Statistical Inference: A Comprehensive Review of Integration Methods, Challenges, and Future Directions
Citations: 0
Authors: 1
Year: 2026
Abstract
With the widespread deployment of machine learning models in high-stakes decision-making contexts, their inherent opacity, often termed the "black-box" problem, has raised significant concerns regarding interpretability and reliability. This paper presents a systematic and comprehensive literature review examining the convergence of interpretable machine learning and statistical inference. It synthesizes foundational concepts, methodological frameworks, theoretical advancements, and practical applications to elucidate how statistical tools can validate, enhance, and formalize machine learning explanations. The review critically analyzes widely adopted techniques such as SHAP and LIME and explores their integration with statistical inference tools, including hypothesis testing, confidence intervals, Bayesian methods, and causal inference frameworks. The analysis reveals that integrated approaches significantly improve explanation credibility, regulatory compliance, and decision transparency in critical domains such as healthcare diagnostics, financial risk management, and algorithmic governance. However, persistent challenges remain in theoretical consistency, computational efficiency, evaluation standardization, and human-centered design. The paper concludes by proposing a structured research agenda focused on unified theoretical frameworks, efficient algorithmic implementations, domain-specific evaluation standards, and interdisciplinary collaboration strategies to advance the responsible development and deployment of explainable AI systems.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations