This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
A Maturity Model for Practical Explainability in Artificial Intelligence‐Based Applications: Integrating Analysis and Evaluation (MM4XAI‐AE) Models
Citations: 1 · Authors: 4 · Year: 2025
Abstract
The increasing adoption of artificial intelligence (AI) in critical domains such as healthcare, law, and defense demands robust mechanisms to ensure transparency and explainability in decision‐making processes. While machine learning and deep learning algorithms have advanced significantly, their growing complexity presents persistent interpretability challenges. Existing maturity frameworks, such as Capability Maturity Model Integration, fall short in addressing the distinct requirements of explainability in AI systems, particularly where ethical compliance and public trust are paramount. To address this gap, we propose the Maturity Model for eXplainable Artificial Intelligence: Analysis and Evaluation (MM4XAI‐AE), a domain‐agnostic maturity model tailored to assess and guide the practical deployment of explainability in AI‐based applications. The model integrates two complementary components: an analysis model and an evaluation model, structured across four maturity levels—operational, justified, formalized, and managed. It evaluates explainability across three critical dimensions: technical foundations, structured design, and human‐centered explainability. MM4XAI‐AE is grounded in the PAG‐XAI framework, emphasizing the interrelated dimensions of practicality, auditability, and governance, thereby aligning with current reflections on responsible and trustworthy AI. The MM4XAI‐AE model is empirically validated through a structured evaluation of thirteen published AI applications from diverse sectors, analyzing their design and deployment practices. The results show a wide distribution across maturity levels, underscoring the model’s capacity to identify strengths, gaps, and actionable pathways for improving explainability. This work offers a structured and scalable framework to standardize explainability practices and supports researchers, developers, and policymakers in fostering more transparent, ethical, and trustworthy AI systems.
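To make the structure described in the abstract concrete, below is a minimal illustrative sketch in Python of how the four maturity levels (operational, justified, formalized, managed) and the three assessment dimensions (technical foundations, structured design, human-centered explainability) could be represented. All class and method names, and the min-based aggregation rule, are hypothetical assumptions for illustration and are not taken from the paper's evaluation model.

```python
from dataclasses import dataclass
from enum import IntEnum


class MaturityLevel(IntEnum):
    """The four MM4XAI-AE maturity levels named in the abstract."""
    OPERATIONAL = 1
    JUSTIFIED = 2
    FORMALIZED = 3
    MANAGED = 4


@dataclass
class ExplainabilityAssessment:
    """Per-dimension maturity ratings for one AI application.

    The three dimensions mirror those listed in the abstract; the
    aggregation rule below is an assumed placeholder, not the
    paper's actual scoring procedure.
    """
    technical_foundations: MaturityLevel
    structured_design: MaturityLevel
    human_centered_explainability: MaturityLevel

    def overall(self) -> MaturityLevel:
        # Assumed conservative rule: an application is only as mature
        # as its weakest dimension.
        return min(
            self.technical_foundations,
            self.structured_design,
            self.human_centered_explainability,
        )


# Usage: an application with formalized technical foundations but only
# operational human-centered explainability rates "operational" overall.
app = ExplainabilityAssessment(
    technical_foundations=MaturityLevel.FORMALIZED,
    structured_design=MaturityLevel.JUSTIFIED,
    human_centered_explainability=MaturityLevel.OPERATIONAL,
)
print(app.overall().name)  # -> OPERATIONAL
```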
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,246 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,228 citations
"Why Should I Trust You?"
2016 · 14.150 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,091 citations