OpenAlex · Updated hourly · Last updated: 25 Mar 2026, 10:04

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Multi-Criteria Decision-Making Approach for the Selection of Explainable AI Methods

2025 · 0 citations · Machine Learning and Knowledge Extraction · Open Access

Citations: 0 · Authors: 2 · Year: 2025

Abstract

The growing trend of using artificial intelligence models in many areas increases the need for a proper understanding of their functioning and decision-making. Although these models achieve high predictive accuracy, their lack of transparency poses major obstacles to trust. Explainable artificial intelligence (XAI) has emerged as a key discipline that offers a wide range of methods to explain the decisions of models. Selecting the most appropriate XAI method for a given application is a non-trivial problem that requires careful consideration of the nature of the method and other aspects. This paper proposes a systematic approach to solving this problem using multi-criteria decision-making (MCDM) techniques: ARAS, CODAS, EDAS, MABAC, MARCOS, PROMETHEE II, TOPSIS, VIKOR, WASPAS, and WSM. The resulting score is an aggregation of the results of these methods using Borda Count. We present a framework that integrates objective and subjective criteria for selecting XAI methods. The proposed methodology includes two main phases. In the first phase, methods that meet the specified parameters are filtered, and in the second phase, the most suitable alternative is selected based on the weights using multi-criteria decision-making and sensitivity analysis. Metric weights can be entered directly, using pairwise comparisons, or calculated objectively using the CRITIC method. The framework is demonstrated on concrete use cases where we compare several popular XAI methods on tasks in different domains. The results show that the proposed approach provides a transparent and robust mechanism for objectively selecting the most appropriate XAI method, thereby helping researchers and practitioners make more informed decisions when deploying explainable AI systems. Sensitivity analysis confirmed the robustness of our XAI method selection: LIME dominated 98.5% of tests in the first use case, and Tree SHAP dominated 94.3% in the second.
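The aggregation step described in the abstract can be sketched in a few lines: assuming each MCDM method (TOPSIS, VIKOR, PROMETHEE II, etc.) produces a ranking of the candidate XAI methods, a Borda Count awards points by rank position and sums them across methods. This is a hypothetical illustration of the general technique with invented data, not the authors' implementation.

```python
# Hypothetical sketch of Borda Count aggregation: each MCDM method yields a
# ranking (list ordered best-to-worst) of the candidate XAI methods, and an
# alternative earns (n - position - 1) points per ranking, where n is the
# number of alternatives. The final order sorts by total points.

def borda_count(rankings):
    """Aggregate several best-to-worst rankings into one consensus ranking."""
    n = len(rankings[0])
    scores = {alt: 0 for alt in rankings[0]}
    for ranking in rankings:
        for position, alt in enumerate(ranking):
            scores[alt] += n - position - 1
    return sorted(scores, key=lambda alt: scores[alt], reverse=True)

# Invented example rankings, as if produced by three MCDM methods:
rankings = [
    ["LIME", "Tree SHAP", "Anchors"],   # e.g. TOPSIS
    ["LIME", "Anchors", "Tree SHAP"],   # e.g. VIKOR
    ["Tree SHAP", "LIME", "Anchors"],   # e.g. PROMETHEE II
]
print(borda_count(rankings))  # → ['LIME', 'Tree SHAP', 'Anchors']
```

With these invented rankings, LIME collects the most Borda points (5 of a possible 6) and tops the consensus order, mirroring how the paper reports a single aggregated score over the ten MCDM methods.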

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI