This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Operationalizing Explainable Artificial Intelligence in the European Union Regulatory Ecosystem
Citations: 12 · Authors: 5 · Year: 2024
Abstract
The European Union’s regulatory ecosystem poses challenges in balancing legal and sociotechnical drivers for explainable AI systems. Core tensions emerge along the dimensions of oversight, user needs, and litigation. This paper maps provisions on algorithmic transparency and explainability across major EU data, AI, and platform policies using qualitative analysis. We characterize the stakeholders involved and the organizational targets of implementation. Constraints become visible between transparency that is useful for accountability and protections for confidentiality. Through the example of an AI hiring system, we explore the complications of operationalizing explainability. Customization is required to satisfy explainability needs within the bounds of confidentiality and proportionality. Our findings advise technologists on the prudent selection of eXplainable AI (XAI) techniques given these multi-dimensional tensions. We recommend that policymakers balance worthy transparency goals with cohesive legislation that enables equitable dispute resolution.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations