This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Implementing Explainable AI to Enhance Business Decision Making & Bridging the Trust Gap
Citations: 0
Authors: 7
Year: 2025
Abstract
Artificial intelligence (AI) has recently witnessed unprecedented growth in its use for decision-making processes. This trend has extended to all sectors of the global economy, promoting innovation and the need to automate business functions. However, the black-box nature of many AI systems has raised concerns relating to trust, transparency, and accountability. This paper investigates in detail the potential of Explainable AI (XAI) to address these legitimate concerns that accompany AI integration. Through a systematic review of existing XAI techniques and their application in business analytics, we show that the shift toward explainable models not only enhances decision-making but also addresses the trust issue that restricts the growth of AI in the business world. The literature further addresses the ethical considerations surrounding the decision to explain one's AI model, how firms should modify their decision-making processes to incorporate XAI, and the consequences of such a change. Organizations are thus positioned to reap the full benefits of AI by aligning AI models with human rationale and expectations, without compromising accountability, fairness, and transparency.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 cit.
Generative Adversarial Nets
2023 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,241 cit.
"Why Should I Trust You?"
2016 · 14,227 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 cit.