OpenAlex · Updated hourly · Last updated: 15.05.2026, 17:38

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explanation in Artificial Intelligence: Insights from the Social Sciences

2017 · 30 citations · arXiv (Cornell University) · Open Access
Open full text at the publisher

Citations: 30 · Authors: 1 · Year: 2017

Abstract

There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a `good' explanation. There exists vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations towards the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.

Related works

Authors

Topics

Explainable Artificial Intelligence (XAI) · Adversarial Robustness in Machine Learning · Machine Learning in Healthcare