OpenAlex · Updated hourly · Last updated: 17.03.2026, 15:55

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Rise of Explainable AI: Enhancing Transparency and Trust in Machine Learning Models

2025 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access

Citations: 0 · Authors: 6 · Year: 2025

Abstract

Explainable Artificial Intelligence (XAI) seeks to reduce the transparency gap in modern AI and machine learning systems, particularly in high-stakes applications such as healthcare, finance, and legal decision-making. This review compares established interpretability methods, including SHAP and LIME, with emerging approaches such as perturbation-based and self-explainable models, evaluating their suitability for medical imaging, credit scoring, and regulatory compliance. The study also examines the evolving regulatory landscape, including the European Union AI Act, the implications of the General Data Protection Regulation (GDPR), and the U.S. Food and Drug Administration (FDA) guidance on AI-based medical devices. In addition, key ethical challenges related to transparency, accountability, and fairness are discussed. Although both post-hoc explanation techniques and intrinsically interpretable models have achieved significant progress, critical challenges remain. These include computational complexity, the accuracy–interpretability trade-off, diverse stakeholder requirements for explanations, and the lack of standardized evaluation metrics. Addressing these issues will require interdisciplinary collaboration across technical research, cognitive science, legal frameworks, and domain-specific expertise.
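The perturbation-based idea mentioned above can be illustrated with a minimal sketch: replace one feature at a time with a baseline value and measure how much the model's output changes. The model, data, and baseline here are hypothetical stand-ins, not taken from the paper, and real tools such as SHAP or LIME refine this idea considerably (e.g. by averaging over feature coalitions or fitting local surrogates).

```python
def predict(x):
    # Toy stand-in for a black-box predictor (e.g. a credit-scoring model);
    # the weights are an illustrative assumption, not from the paper.
    weights = [0.6, -0.3, 0.1]
    return sum(w * xi for w, xi in zip(weights, x))

def perturbation_attributions(x, baseline):
    """Attribute each feature the drop in model output observed when
    that feature alone is replaced by its baseline value."""
    full = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]      # occlude feature i
        scores.append(full - predict(perturbed))
    return scores

# Attributions are roughly [0.6, -0.6, 0.3]: feature 1 pushes the
# score down, features 0 and 2 push it up.
print(perturbation_attributions([1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))
```

For a linear model these attributions recover each term's contribution exactly; for nonlinear models they are only a local approximation, which is one reason the review stresses the lack of standardized evaluation metrics for explanations.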
