This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Toward Transparent Optimization: A Systematic Review of Explainable AI in Decision-Making Systems
Citations: 0 · Authors: 4 · Year: 2025
Abstract
The increasing reliance on artificial intelligence (AI) for high-stakes decision-making has heightened the need for systems that prioritize not only accuracy but also interpretability and transparency. Although optimization techniques—such as metaheuristics, mathematical programming, and reinforcement learning—have significantly propelled the development of intelligent systems, their inherent black-box characteristics often hinder trust, accountability, and effective human-AI interaction. This article presents a comprehensive systematic review of the emerging intersection between explainable AI (XAI) and optimization. We explore how interpretability is being systematically incorporated into optimization-driven decision-making pipelines across a variety of application domains. The study offers a critical analysis and classification of existing research, focusing on the integration of XAI methods (e.g., SHAP, LIME, saliency maps) with optimization strategies (e.g., genetic algorithms, simulated annealing, mixed-integer linear programming, and reinforcement learning-based methods). These integrations are examined across sectors such as healthcare, finance, logistics, and energy systems. A structured taxonomy is introduced to categorize hybrid approaches according to their level of explainability, optimization complexity, and domain specificity. In addition, the review highlights key challenges in the field, including the trade-off between performance and interpretability, the absence of standardized benchmarks, and issues related to model scalability. Finally, we outline promising research directions such as the development of explainable hyper-heuristics, domain-adaptable interpretable solvers, and AI frameworks aligned with regulatory standards. By synthesizing this evolving body of knowledge, the article aims to serve as a foundational resource for researchers and practitioners striving to build transparent, trustworthy, and effective optimization-based AI systems.
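As a rough illustration of the kind of XAI-optimization coupling the abstract describes, the sketch below (not taken from the reviewed article; the toy objective, the surrogate-model design, and all parameters are assumptions for illustration) pairs one named XAI method, SHAP, with one named optimization strategy, a genetic algorithm. A random-forest surrogate is fit to the current population, and mean absolute SHAP values then serve both as a per-variable explanation of the surrogate and as a signal that concentrates mutation on the most influential decision variables.

```python
# Illustrative sketch only: a SHAP-guided genetic algorithm on a toy
# black-box objective. The objective, population sizes, and mutation
# scaling are hypothetical choices, not the paper's method.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def objective(x):
    # Toy black-box objective: only the first three variables matter.
    return -(x[0] - 1) ** 2 - (x[1] + 2) ** 2 - x[2] ** 2

DIM, POP, GENS = 10, 40, 30
pop = rng.normal(size=(POP, DIM))

for gen in range(GENS):
    fitness = np.array([objective(ind) for ind in pop])

    # Fit a surrogate of the objective on the evaluated population, then
    # use mean |SHAP| per variable as an interpretable importance signal.
    surrogate = RandomForestRegressor(n_estimators=50, random_state=0)
    surrogate.fit(pop, fitness)
    shap_values = shap.TreeExplainer(surrogate).shap_values(pop)
    importance = np.abs(shap_values).mean(axis=0)
    mutation_scale = 0.5 * importance / (importance.max() + 1e-12)

    # Selection keeps the better half; mutation strength tracks SHAP
    # importance, so search effort concentrates on explanatory variables.
    elite = pop[np.argsort(fitness)[-POP // 2:]]
    children = elite + rng.normal(size=elite.shape) * mutation_scale
    pop = np.vstack([elite, children])

best = pop[np.argmax([objective(ind) for ind in pop])]
print("best solution:", np.round(best, 2))
print("variable importance (mean |SHAP|):", np.round(importance, 3))
```

In this toy setup the SHAP attributions double as a steering signal and as a human-readable account of where the optimizer is spending its effort; the hybrid approaches surveyed in the article differ mainly in how tightly these two components are coupled.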
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,227 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations