This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI: A Diverse Stakeholder Perspective
Citations: 3
Authors: 2
Year: 2024
Abstract
Artificial Intelligence (AI) is increasingly integral to classification and prediction tasks across various fields, including healthcare, legal systems, autonomous vehicles, and financial services [1]. As such, stakeholders such as system developers, system operators, and end-users require varying levels of explanation for the decisions proposed by these AI systems in order to trust them, rely on them, and use them in practice. The growing reliance on AI as a decision-support tool in these critical areas underscores the need for AI systems to have an explainable development process and architecture and to be comprehensible to their users, ensuring their use is safe, responsible, and in compliance with legal standards.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations