This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable artificial intelligence: tools, platforms, and new taxonomies
Citations: 1
Authors: 5
Year: 2023
Abstract
Recent advances in machine learning (ML) have produced numerous artificial intelligence (AI)-based systems capable of perceiving, learning, making intelligent decisions, and acting quickly in a given situation. Although this is precisely what is expected of such systems, observing their performance reveals that they are often unable to explain their actions to their users. Researchers have since taken this limitation seriously, as explainability is essential for making autonomous systems more intelligent and robust. This recognition motivated explainable AI (XAI), which makes the verifiability of a decision a core requirement and increases the demand for the ability to question, understand, and, above all, trust AI systems. Although several models exist, there is still no consensus on how to assess explainability. This chapter therefore presents a comprehensive review of the current state of the art in XAI with a focus on societal impact. It also surveys the drivers and tools for XAI, and concludes with a complete literature review that identifies future research directions in this area.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,204 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 citations