OpenAlex · Updated hourly · Last updated: 20.03.2026, 09:01

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

What Do People Really Want When They Say They Want "Explainable AI?" We Asked 60 Stakeholders.

2020 · 70 citations
Open full text at publisher

Citations: 70
Authors: 1
Year: 2020

Abstract

This paper summarizes findings from a qualitative research effort aimed at understanding how various stakeholders characterize the problem of Explainable Artificial Intelligence (Explainable AI or XAI). During a nine-month period, the author conducted 40 interviews and 2 focus groups. An analysis of the data gathered led to two significant initial findings: (1) current discourse on Explainable AI is hindered by a lack of consistent terminology; and (2) there are multiple distinct use cases for Explainable AI, including debugging models, understanding bias, and building trust. These use cases assume different user personas, will likely require different explanation strategies, and are not evenly addressed by current XAI tools. This stakeholder research supports a broad characterization of the problem of Explainable AI and can provide important context to inform the design of future capabilities.

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education