This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable Artificial Intelligence: Analysis of Methodologies and Applications
Citations: 0
Authors: 2
Year: 2025
Abstract
The lack of transparency and explainability in machine learning models, often referred to as "black boxes," presents a significant challenge that undermines trust and decision-making in critical applications such as medicine, finance, and security. This study examines the necessity of improving explainability by evaluating recent advancements in explainability techniques, comparing them to earlier approaches, and assessing their impact on both theory and practice. Through a comprehensive literature review, current methodologies were identified, categorized, and evaluated based on their effectiveness and practical applications. The findings highlight the importance of well-established techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), alongside novel approaches such as SAMCNet and entropy-based methods, for their ability to provide clearer and more understandable explanations. However, significant challenges remain, including the need for model-agnostic XAI (Explainable Artificial Intelligence) techniques that can be generalized across different contexts. These findings emphasize the ongoing importance of research in this field to enhance transparency and trust in AI.
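For context only: the abstract describes model-agnostic techniques such as LIME and SHAP, which explain a black-box model by probing it around a single input. The sketch below is not from the paper; it is a deliberately simplified illustration of that intuition using finite-difference sensitivity rather than LIME's surrogate-model fitting, and the `black_box` function and `eps` value are illustrative assumptions.

```python
def black_box(x):
    # Hypothetical opaque model: a nonlinear score over three features.
    # Stands in for any trained classifier or regressor we cannot inspect.
    return 2.0 * x[0] - 1.0 * x[1] + 0.1 * x[0] * x[2]

def local_influence(model, x, eps=1e-4):
    """Estimate each feature's local effect on the model output.

    Perturbs one feature at a time and measures the change in the
    model's output -- the core idea behind model-agnostic, local
    explanation methods (here via finite differences, a much simpler
    cousin of LIME's perturbation-plus-surrogate approach).
    """
    base = model(x)
    influences = []
    for i in range(len(x)):
        x_pert = list(x)
        x_pert[i] += eps
        influences.append((model(x_pert) - base) / eps)
    return influences

# Explain the model's behavior around one specific input.
x = [1.0, 2.0, 3.0]
weights = local_influence(black_box, x)
# The largest-magnitude weight marks the locally most influential feature.
```

Because only the model's inputs and outputs are queried, the same routine works for any callable model, which is what "model-agnostic" means in this context.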
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations