OpenAlex · Updated hourly · Last updated: 11.03.2026, 14:06

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Recent Emerging Techniques in Explainable Artificial Intelligence to Enhance the Interpretable and Understanding of AI Models for Human

2025 · 82 citations · Neural Processing Letters · Open Access
Open full text at publisher

82 citations · 5 authors · Year: 2025

Abstract

Recent advancements in Explainable Artificial Intelligence (XAI) aim to bridge the gap between complex artificial intelligence (AI) models and human understanding, fostering trust and usability in AI systems. However, challenges persist in comprehensively interpreting these models, hindering their widespread adoption. This study addresses these challenges by exploring recently emerging techniques in XAI. The primary problem addressed is the lack of transparency and interpretability in AI models intended for institution-wide use, which undermines user trust and inhibits their integration into critical decision-making processes. Through an in-depth review, this study identifies the objectives of enhancing the interpretability of AI models and improving human understanding of their decision-making processes. Various methodological approaches, including post-hoc explanations, model transparency methods, and interactive visualization techniques, are investigated to elucidate AI model behaviours. We further present techniques and methods to make AI models more interpretable and understandable to humans, including their strengths and weaknesses, to demonstrate promising advancements in model interpretability, facilitating better comprehension of complex AI systems by humans. In addition, we present applications of XAI in local use cases. Challenges, solutions, and open research directions are highlighted to clarify these compelling XAI utilization challenges. The implications of this research are profound, as enhanced interpretability fosters trust in AI systems across diverse applications, from healthcare to finance. By empowering users to understand and scrutinize AI decisions, these techniques pave the way for more responsible and accountable AI deployment.
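The abstract mentions post-hoc explanations as one family of XAI methods. As a minimal illustration of that idea, the sketch below implements permutation feature importance, a standard model-agnostic post-hoc technique: shuffle one feature's values, and measure how much the model's error grows. The toy "black-box" model and data here are hypothetical stand-ins for illustration only; they are not taken from the paper.

```python
import random

def model_predict(row):
    # Hypothetical black-box model: relies heavily on feature 0,
    # weakly on feature 1, and ignores feature 2 entirely.
    return 3.0 * row[0] + 0.5 * row[1] + 0.0 * row[2]

def mse(data, targets):
    # Mean squared error of the model over a dataset.
    return sum((model_predict(r) - t) ** 2 for r, t in zip(data, targets)) / len(data)

def permutation_importance(data, targets, feature, seed=0):
    # Post-hoc explanation: importance = error increase after shuffling
    # one feature's column, which breaks its link to the target.
    rng = random.Random(seed)
    column = [r[feature] for r in data]
    rng.shuffle(column)
    shuffled = [list(r) for r in data]
    for r, v in zip(shuffled, column):
        r[feature] = v
    return mse(shuffled, targets) - mse(data, targets)

# Synthetic data whose targets follow the model exactly (baseline error 0).
data = [[float(i), float(i % 5), float(i % 3)] for i in range(50)]
targets = [model_predict(r) for r in data]

scores = [permutation_importance(data, targets, f) for f in range(3)]
# Expected ordering: feature 0 dominates, feature 2 scores 0 (unused).
```

Because the technique only queries the model through `model_predict`, the same code works unchanged for any opaque model, which is precisely what makes post-hoc methods attractive for auditing deployed AI systems.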

Similar works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education