This is an overview page with metadata for this scholarly article. The full text is available from the publisher.
Explainable AI: Enhancing Transparency and Trust in Machine Learning
Citations: 0
Authors: 3
Year: 2026
Abstract
Artificial Intelligence (AI) has become widely used for solving complex problems across various domains. However, many advanced AI models, especially deep learning systems, operate as "black boxes" whose decision-making process is not transparent. This lack of interpretability raises concerns about trust, accountability, and ethical use, particularly in critical areas such as healthcare and finance. Explainable Artificial Intelligence (XAI) addresses these challenges by making AI models more transparent and understandable. This paper presents a study of XAI techniques, focusing on LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations). These techniques explain how input features influence predictions, providing both local and global interpretability. The methodology involves training a machine learning model and applying XAI techniques to analyse its predictions. The results show that LIME provides simple local explanations, while SHAP offers more consistent and detailed insights into feature importance. These approaches improve trust and transparency and help identify potential biases in AI systems. Despite its advantages, XAI faces challenges such as computational cost and trade-offs between accuracy and interpretability. Overall, XAI is essential for developing reliable, ethical, and human-centered AI systems.
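The abstract describes a two-step pipeline: train a model, then explain its predictions with LIME and SHAP. The following is a minimal Python sketch of that pipeline using the `lime` and `shap` packages; the specific dataset and model (scikit-learn's breast cancer data and a random forest) are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the pipeline described in the abstract: train a classifier,
# then apply LIME (local explanations) and SHAP (local + global importance).
# The dataset and model choice here are assumptions for illustration only.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME: perturb a single instance and fit an interpretable surrogate
# model around it, yielding a local explanation for that prediction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(lime_exp.as_list())  # top local feature contributions

# SHAP: Shapley-value attributions per feature; aggregating them over
# the test set gives the global feature importance the abstract mentions.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=data.feature_names)
```

This mirrors the contrast drawn in the results: `explain_instance` produces a simple per-prediction explanation, while the SHAP summary aggregates consistent attributions across the whole test set.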
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,962 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,358 citations
"Why Should I Trust You?"
2016 · 14,704 citations
Generative adversarial networks
2020 · 13,328 citations