This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Rise of Explainable AI: Enhancing Transparency and Trust in Machine Learning Models
Citations: 0
Authors: 6
Year: 2025
Abstract
Explainable Artificial Intelligence (XAI) seeks to reduce the transparency gap in modern AI and machine learning systems, particularly in high-stakes applications such as healthcare, finance, and legal decision-making. This review compares established interpretability methods, including SHAP and LIME, with emerging approaches such as perturbation-based and self-explainable models, evaluating their suitability for medical imaging, credit scoring, and regulatory compliance. The study also examines the evolving regulatory landscape, including the European Union AI Act, the implications of the General Data Protection Regulation (GDPR), and the U.S. Food and Drug Administration (FDA) guidance on AI-based medical devices. In addition, key ethical challenges related to transparency, accountability, and fairness are discussed. Although both post-hoc explanation techniques and intrinsically interpretable models have achieved significant progress, critical challenges remain. These include computational complexity, the accuracy–interpretability trade-off, diverse stakeholder requirements for explanations, and the lack of standardized evaluation metrics. Addressing these issues will require interdisciplinary collaboration across technical research, cognitive science, legal frameworks, and domain-specific expertise.
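To make the comparison concrete, below is a minimal sketch of one of the established post-hoc methods the abstract names, SHAP, applied to a tabular classifier. The model, synthetic data, and feature setup are illustrative assumptions for this overview, not taken from the paper.

# Minimal sketch (illustrative, not from the paper): post-hoc explanation
# of a tree-ensemble classifier with SHAP. Data and model are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # hypothetical tabular features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic binary labels

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value feature attributions efficiently
# for tree-based models; each explanation is local to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

print(shap_values)  # per-feature contributions to each of the 5 predictions

LIME, the other established method mentioned in the abstract, would instead fit an interpretable local surrogate model around each individual prediction.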
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14.210 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations