This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Explainable Artificial Intelligence: Frameworks for Ensuring the Trustworthiness
2 citations · 3 authors · 2024
Abstract
The growing computing power and ubiquity of big data are enabling Artificial Intelligence (AI) to gain widespread adoption across a wide range of sectors. The absence of explanations for the conclusions reached by today’s AI algorithms is a significant disadvantage in critical decision-making systems. For example, existing black-box AI systems are vulnerable to bias and adversarial attacks, which can taint the learning and inference processes. Explainable AI (XAI) is a recent trend in AI algorithms that provides explanations for AI conclusions. Many contemporary AI systems have been shown to be vulnerable to undetectable attacks, biased against underrepresented groups, and deficient in protecting user privacy. These flaws damage the user experience and undermine people’s faith in all AI systems. This study proposes a systematic way to tie social science notions of trust to the technology employed in AI-based services and products.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,408 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,253 citations
"Why Should I Trust You?"
2016 · 14,286 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,132 citations