OpenAlex · Updated hourly · Last updated: 16.03.2026, 00:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A social evaluation of the perceived goodness of explainability in machine learning

2021 · 16 citations · Journal of Business Analytics

16 citations · 4 authors · 2021

Abstract

Machine learning in decision support systems already outperforms pre-existing statistical methods. However, its predictions face challenges, as the underlying calculations are often complex and not all model predictions are traceable. In fact, many well-performing models are black boxes to the user, who consequently cannot interpret and understand the rationale behind a model's prediction. Explainable artificial intelligence has emerged as a field of study to counteract this. However, current research often neglects the human factor. Against this backdrop, we derived and examined factors that influence the goodness of a model's explainability in a social evaluation by end users. We implemented six common ML algorithms for four different benchmark datasets in a two-factor factorial design and asked potential end users to rate different factors in a survey. Our results show that the perceived goodness of explainability is moderated by the problem type and correlates strongly with trustworthiness as the most important factor.
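The two-factor factorial design mentioned above crosses every ML algorithm with every benchmark dataset. A minimal sketch of such a design is shown below; the specific algorithm and dataset names are placeholders, not taken from the paper.

```python
from itertools import product

# Hypothetical condition labels: the paper names neither its six algorithms
# nor its four datasets in this abstract, so these are illustrative only.
algorithms = ["logistic_regression", "decision_tree", "random_forest",
              "svm", "knn", "neural_net"]                 # six ML algorithms
datasets = ["dataset_A", "dataset_B", "dataset_C", "dataset_D"]  # four benchmarks

# Full factorial design: every algorithm is paired with every dataset,
# yielding 6 x 4 = 24 experimental conditions for end users to rate.
conditions = list(product(algorithms, datasets))
print(len(conditions))  # 24
```

Each resulting (algorithm, dataset) pair is one experimental condition whose explanations survey participants would rate.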


Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education