This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable Artificial Intelligence and Data Canyons in the Context of Cybernetics
Citations: 0
Authors: 4
Year: 2025
Abstract
Explainable artificial intelligence (XAI) is gaining traction because of the popularity and performance of machine learning (ML) and artificial intelligence (AI), especially deep learning (DL) and large language models (LLMs). These advancements present opportunities to improve workflows and systems across different domains, but the lack of explainability in AI decision-making poses significant challenges. Data Canyons, a new explainable algorithm, addresses these problems by adding interpretation and explanation layers to AI models, thereby enhancing trust and usability. The feasibility of Data Canyons was tested both as a standalone ML algorithm and as an interpretative layer for other models, and compared with state-of-the-art XAI solutions. Human decision-making tests demonstrated that Data Canyons are reliable, understandable, and easy to interpret, particularly through the visualisation layer, which is accessible without expert training. One unique aspect of Data Canyons is the inherent ability to provide both local and global explanation layers. Data Canyons can be utilised as a standalone, complete ML solution where transparency is a key factor, or as a supportive mechanism for other algorithms. XAI is a key factor in systemic solutions that focus on health, safety, and well-being in general, as the success and feasibility of integrating AI solutions depend heavily on it. Data Canyons present a well-rounded approach to XAI with a wide range of applications. However, further integration into sophisticated AI and ML tools and architectural translation for high-performance computing are still needed to enable widespread adoption.
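The abstract distinguishes local explanations (why the model made one specific prediction) from global explanations (which features drive the model overall). This page does not describe the internals of the Data Canyons algorithm, so the Python sketch below illustrates only that generic local/global distinction using scikit-learn: permutation importance as a global explanation and a simple single-sample perturbation analysis as a stand-in for a local one. The dataset, model, and variable names are illustrative assumptions, not the authors' method.

# Illustrative sketch of local vs. global explanations.
# NOT the Data Canyons algorithm; a generic scikit-learn example.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explanation: which features matter on average across the dataset.
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("global importances:", global_imp.importances_mean)

# Local explanation: how each feature influences one specific prediction,
# estimated by nudging that feature and observing how the predicted
# probability of the originally predicted class changes.
x = X[0:1]
base_class = model.predict(x)[0]
base_p = model.predict_proba(x)[0, base_class]
local_imp = []
for j in range(X.shape[1]):
    x_pert = x.copy()
    x_pert[0, j] += X[:, j].std()  # shift feature j by one standard deviation
    local_imp.append(base_p - model.predict_proba(x_pert)[0, base_class])
print("local attributions for sample 0:", local_imp)

A positive local attribution for feature j means that perturbing that feature lowers the model's confidence in its original prediction, i.e. the prediction leans on that feature for this particular sample.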
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,474 citations
Generative Adversarial Nets
2014 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,262 citations
"Why Should I Trust You?"
2016 · 14.326 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,143 citations