OpenAlex · Updated hourly · Last updated: 27.03.2026, 06:06

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable Artificial Intelligence: Frameworks for Ensuring the Trustworthiness

2024 · 2 citations · 3 authors

Open full text at the publisher

Abstract

Growing computing power and the ubiquity of big data are driving the widespread adoption of Artificial Intelligence (AI) across a wide range of sectors. The absence of explanations for the conclusions reached by today's AI algorithms is a significant drawback in critical decision-making systems. For example, existing black-box AI systems are vulnerable to bias and adversarial attacks, which can taint both the learning and inference processes. Explainable AI (XAI) is a recent trend in AI that provides explanations for a model's conclusions. Many contemporary AI systems have been shown to be vulnerable to undetectable attacks, biased against underrepresented groups, and deficient in protecting user privacy. These flaws damage the user experience and undermine people's trust in AI systems as a whole. This study proposes a systematic way to tie social science notions of trust to the technology employed in AI-based services and products.
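To make the idea of an XAI explanation concrete, the sketch below shows the simplest kind of local explanation: attributing a linear model's score to each input feature, in the spirit of LIME-style linear surrogates. The model, its coefficients, and the example inputs are all illustrative assumptions, not taken from the paper.

```python
def explain_linear(weights, x, bias=0.0):
    """Attribute a linear model's score to each input feature.

    For f(x) = sum(w_i * x_i) + bias, the contribution of feature i
    is w_i * x_i, so the contributions plus the bias reconstruct the
    score exactly -- a minimal additive explanation.
    """
    contributions = [w * xi for w, xi in zip(weights, x)]
    score = sum(contributions) + bias
    return score, contributions

# Toy example: a hypothetical two-feature scoring model.
weights = [0.8, -0.5]   # assumed model coefficients
x = [1.0, 2.0]          # one input's feature values
score, contribs = explain_linear(weights, x, bias=0.2)
print(score)            # approximately 0.0
print(contribs)         # approximately [0.8, -1.0]
```

Real XAI toolkits (e.g. LIME or SHAP) generalize this additive decomposition to non-linear black-box models, but the output has the same shape: a per-feature contribution that sums back to the prediction.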

Related works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Adversarial Robustness in Machine Learning