This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Making AI Explainable in the Global South: A Systematic Review
Citations: 43
Authors: 3
Year: 2022
Abstract
Artificial intelligence (AI) and machine learning (ML) are quickly becoming pervasive in ways that impact the lives of all humans across the globe. In an effort to make otherwise “black box” AI/ML systems more understandable, the field of Explainable AI (XAI) has arisen with the goal of developing algorithms, toolkits, frameworks, and other techniques that enable people to comprehend, trust, and manage AI systems. However, although XAI is a rapidly growing area of research, most of the work has focused on contexts in the Global North, and little is known about if or how XAI techniques have been designed, deployed, or tested with communities in the Global South. This gap is concerning, especially in light of rapidly growing enthusiasm from governments, companies, and academics to use AI/ML to “solve” problems in the Global South. Our paper contributes the first systematic review of XAI research in the Global South, providing an early look at emerging work in the space. We identified 16 papers from 15 different venues that targeted a wide range of application domains. All of the papers were published in the last three years. Of the 16 papers, 13 focused on applying a technical XAI method, all of which involved the use of (at least some) data that was local to the context. However, only three papers engaged with or involved humans in the work, and only one attempted to deploy their XAI system with target users. We close by reflecting on the current state of XAI research in the Global South, discussing data and model considerations for building and deploying XAI systems in these regions, and highlighting the need for human-centered approaches to XAI in the Global South.