This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Exploring Explainable Artificial Intelligence for Transparent Decision Making
29
Citations
7
Authors
2023
Year
Abstract
Artificial intelligence (AI) has become a potent tool in many fields, allowing complicated tasks to be completed with astounding effectiveness. However, as AI systems grow more complex, concerns about their interpretability and transparency have become increasingly prominent. It is now more important than ever to apply Explainable Artificial Intelligence (XAI) methodologies in decision-making processes, where the capacity to understand and trust AI-based judgments is crucial. This abstract explores the concept of XAI and its importance for promoting transparent decision-making. Ultimately, XAI has proven crucial for enabling clear decision-making in AI systems. XAI approaches close the cognitive gap between complex algorithms and human comprehension by empowering users to understand and analyze the inner workings of AI models. XAI equips stakeholders to evaluate and trust AI systems, ensuring fairness, accountability, and ethical standards in fields such as healthcare and finance, where AI-based decisions have substantial ramifications. Despite ongoing hurdles, the advancement of XAI is essential for realizing AI's full potential while retaining transparency and human-centric decision-making.
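As a concrete illustration of the kind of XAI technique the abstract alludes to, the sketch below computes permutation feature importance, a simple model-agnostic explanation method: a feature matters if shuffling its values degrades the model's accuracy. The model, data, and function names here are hypothetical examples for illustration, not taken from the paper.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature/label association
            drops.append(baseline - np.mean(model(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

# Hypothetical setup: the label depends only on feature 0; feature 1 is noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
model = lambda X: (X[:, 0] > 0).astype(int)  # stand-in for a "black box"

imp = permutation_importance(model, X, y)
print(imp)  # feature 0 should dominate; feature 1 should be near zero
```

The same idea scales to real black-box models: only predictions are needed, never the model internals, which is what makes such explanations applicable across healthcare, finance, and other domains the abstract mentions.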
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations