OpenAlex · Updated hourly · Last updated: 20.03.2026, 04:26

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Implementing Explainable AI to Enhance Business Decision Making & Bridging the Trust Gap

2025 · 0 citations

Citations: 0

Authors: 7

Year: 2025

Abstract

Artificial intelligence (AI) has recently seen unprecedented growth in its use for decision-making. This trend extends across all sectors of the global economy, driving innovation and the automation of business functions. However, the black-box nature of many AI systems has raised concerns about trust, transparency, and accountability. This paper investigates in detail the potential of Explainable AI (XAI) to address these legitimate concerns that accompany AI integration. Through a systematic review of existing XAI techniques and their application in business analytics, we show that the shift toward explainable models not only enhances decision-making but also addresses the trust issue that restricts the growth of AI in the business world. The literature further addresses the ethical considerations surrounding the decision to explain one's AI model, how firms should modify their decision-making processes to incorporate XAI, and the consequences of such a change. Organizations are thus positioned to reap the full benefits of AI by aligning AI models with human rationale and expectations, without compromising accountability, fairness, and transparency.
