OpenAlex · Updated hourly · Last updated: 15 Mar 2026, 06:08

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Assessing Collaborative Explanations of AI using Explanation Goodness Criteria

2021 · 5 citations · Proceedings of the Human Factors and Ergonomics Society Annual Meeting · Open Access
Open full text at publisher

Citations: 5 · Authors: 5 · Year: 2021

Abstract

Explainable AI represents an increasingly important category of systems that attempt to support human understanding of, and trust in, machine intelligence and automation. Typical systems rely on algorithms to help users understand the information underlying decisions and to establish justified trust and reliance. Researchers have proposed using goodness criteria to measure the quality of explanations as a formative evaluation of an XAI system, but these criteria have not been systematically investigated in the literature. To explore this, we present a novel collaborative explanation system (CXAI) and propose several goodness criteria to evaluate the quality of its explanations. Results suggest that the explanations provided by this system are typically correct, informative, and written in understandable ways, and that they focus on explaining larger-scale data patterns than those typically generated by algorithmic XAI systems. Implications for how these criteria may be applied to other XAI systems are discussed.

Topics

Explainable Artificial Intelligence (XAI) · Topic Modeling · Artificial Intelligence in Healthcare and Education