This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Exploring the means to measure explainability: Metrics, heuristics and questionnaires
6 citations · 4 authors · 2025
Abstract
As the complexity of modern software is steadily growing, these systems become increasingly difficult to understand for their stakeholders. At the same time, opaque and artificially intelligent systems permeate a growing number of safety-critical areas, such as medicine and finance. As a result, explainability is becoming more important as a software quality aspect and non-functional requirement. Contemporary research has mainly focused on making artificial intelligence and its decision-making processes more understandable. However, explainability has also gained traction in recent requirements engineering research. This work aims to contribute to that body of research by providing a quality model for explainability as a software quality aspect. Quality models provide means and measures to specify and evaluate quality requirements. In order to design a user-centered quality model for explainability, we conducted a literature review. We identified ten fundamental aspects of explainability. Furthermore, we aggregated criteria and metrics to measure them, as well as alternative means of evaluation in the form of heuristics and questionnaires. Our quality model and the related means of evaluation enable software engineers to develop and validate explainable systems in accordance with their explainability goals and intentions. This is achieved by examining the fundamental aspects of explainability and the related development goals from different angles. Thus, we provide a foundation that improves the management and verification of explainability requirements.

Highlights
• Literature review on criteria and measures for explainability.
• Quality model for explainability including ten aspects of explainability.
• User-centered metrics, heuristics and questionnaires to evaluate explainability.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations