This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI in High-Stakes Domains: Improving Trust, Transparency, And Accountability in Automated Decision-Making
Citations: 0 · Authors: 1 · Year: 2026
Abstract
The growing use of artificial intelligence in high-stakes fields such as healthcare, finance, and government has raised significant concerns about trust, transparency, and accountability in automated decision-making systems. Explainable Artificial Intelligence (XAI) has emerged as a primary means of mitigating the limitations of opaque black-box models by making them more interpretable and enabling human oversight. This paper analyzes the theoretical foundations, governance frameworks, and socio-technical consequences of explainable AI, synthesizing the interdisciplinary literature on explainability to assess its value in the adoption of trustworthy AI. Through a systematic literature review, the study identifies fundamental relationships between explainability and user trust, ethical governance, and organizational accountability. The results indicate that technical transparency must be combined with human-centered design to enhance the legitimacy of decisions and support responsible AI deployment in complex, high-risk settings.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,929 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,356 citations
"Why Should I Trust You?"
2016 · 14,688 citations
Generative adversarial networks
2020 · 13,316 citations