This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications
Citations: 4
Authors: 6
Year: 2019
Abstract
The presumed data owners' right to explanations brought about by the General Data Protection Regulation in Europe has shed light on the social challenges of explainable artificial intelligence (XAI). In this paper, we present a case study with Deep Learning (DL) experts from a research and development laboratory focused on the delivery of industrial-strength AI technologies. Our aim was to investigate the social meaning (i.e. meaning to others) that DL experts assign to what they do, given a richly contextualized and familiar domain of application. Using qualitative research techniques to collect and analyze empirical data, our study has shown that participating DL experts did not spontaneously engage in considerations about the social meaning of the machine learning models that they build. Moreover, when explicitly stimulated to do so, these experts expressed expectations that, in real-world DL applications, mediators will be available to bridge the gap between the technical meanings that drive DL work and the social meanings that AI technology users assign to it. We concluded that the current research incentives and values guiding the participants' scientific interests and conduct are at odds with those required to face some of the scientific challenges involved in advancing XAI, and thus in responding to the alleged data owners' right to explanations or similar societal demands emerging from current debates. As a concrete contribution to mitigating what seems to be a more general problem, we propose three preliminary XAI Mediation Challenges with the potential to bring together technical and social meanings of DL applications, as well as to foster much-needed interdisciplinary collaboration among AI and Social Sciences researchers.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,284 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,233 citations
"Why Should I Trust You?"
2016 · 14,179 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,096 citations