OpenAlex · Updated hourly · Last updated: 19.03.2026, 22:56

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Deep learning models and the limits of explainable artificial intelligence

2025 · 2 citations · Asian Journal of Philosophy · Open Access
Open full text at publisher

Citations: 2 · Authors: 3 · Year: 2025

Abstract

It has often been argued that we face a trade-off between accuracy and opacity in deep learning models. The idea is that we can only harness the accuracy of deep learning models by simultaneously accepting that the grounds for the models’ decision-making are epistemically opaque to us. In this paper, we ask the following question: what are the prospects of making deep learning models transparent without compromising on their accuracy? We argue that the answer to this question depends on which kind of opacity we have in mind. If we focus on the standard notion of opacity, which tracks the internal complexities of deep learning models, we argue that existing explainable AI (XAI) techniques show us that the prospects look relatively good. But, as it has recently been argued in the literature, there is another notion of opacity that concerns factors external to the model. We argue that there are at least two types of external opacity—link opacity and structure opacity—and that existing XAI techniques can to some extent help us reduce the former but not the latter.

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare