This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Mapping the landscape of ethical considerations in explainable AI research
Citations: 14
Authors: 3
Year: 2024
Abstract
With its potential to contribute to the ethical governance of AI, eXplainable AI (XAI) research frequently asserts its relevance to ethical considerations. Yet, the substantiation of these claims with rigorous ethical analysis and reflection remains largely unexamined. This contribution scrutinizes the relationship between XAI and ethical considerations. By systematically reviewing research papers that mention ethical terms in the context of XAI frameworks and tools, we investigate the extent and depth of ethical discussions in scholarly research. We observe a limited and often superficial engagement with ethical theories, with a tendency to acknowledge the importance of ethics while treating it as a monolithic and decontextualized concept. Our findings suggest a pressing need for a more nuanced and comprehensive integration of ethics in XAI research and practice. To support this, we propose critically reconsidering transparency and explainability with regard to ethical considerations during the design of XAI systems, while accounting for ethical complexity in practice. As future research directions, we point to the promotion of interdisciplinary collaboration and education, including for underrepresented ethical perspectives. Such ethical grounding can guide the design of ethically robust XAI systems, aligning technical advancements with ethical considerations.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations