OpenAlex · Updated hourly · Last updated: 25.04.2026, 08:54

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Pseudo datasets estimate feature attribution in artificial neural networks

2025 · 0 citations · Scientific Reports · Open Access

Citations: 0 · Authors: 4 · Year: 2025

Abstract

Neural networks demonstrate exceptional predictive performance across diverse classification tasks. However, their lack of interpretability restricts their widespread application. Consequently, in recent years, numerous researchers have focused on model explanation techniques to elucidate the internal mechanisms of these 'black box' models. Yet, prevailing explanation methods predominantly focus on elucidating individual features, thereby overlooking synergistic effects and interactions among multiple features, potentially hindering a comprehensive understanding of the model's predictive behavior. Therefore, this study proposes a two-stage explanation method, known as Pseudo Datasets Perturbation Effect (PDPE). The fundamental concept is to discern feature importance by perturbing the data and observing the influence on prediction outcomes. For structured data, this method identifies potential feature interactions while evaluating the relative significance of individual features and their interaction terms. Compared with the widely recognized SHAP Value method, our computer simulation studies, in which neural networks approximate the linear association of logistic regression, demonstrate that PDPE provides faster, more accurate explanations. PDPE helps users understand the significance of individual features and their interactions for model predictions. Additionally, an analysis of real-life data from the National Institute of Diabetes and Digestive and Kidney Diseases also demonstrates the superior performance of the new approach.
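The abstract does not give PDPE's exact algorithm, but its core idea (perturb features, measure the change in predictions, and compare joint versus individual perturbation effects to surface interactions) can be illustrated with a minimal sketch. The model, its weights, and the shuffle-based perturbation scheme below are assumptions for illustration, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained model": a fixed logistic predictor over 3 features,
# with an interaction between features 0 and 1. Feature 2 is irrelevant.
def predict(X):
    logits = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
    return 1.0 / (1.0 + np.exp(-logits))

X = rng.normal(size=(500, 3))

def perturb_effect(X, features):
    """Mean absolute change in predictions when the given feature
    columns are shuffled across rows -- a proxy for their importance."""
    Xp = X.copy()
    for j in features:
        Xp[:, j] = rng.permutation(Xp[:, j])
    return float(np.mean(np.abs(predict(Xp) - predict(X))))

# Stage 1: individual feature effects.
single = {j: perturb_effect(X, [j]) for j in range(X.shape[1])}

# Stage 2: an interaction signal -- the joint perturbation effect of a
# feature pair compared against the sum of their individual effects.
joint_01 = perturb_effect(X, [0, 1])
interaction_01 = joint_01 - (single[0] + single[1])
```

In this toy setup the irrelevant feature 2 scores exactly zero, features 0 and 1 score positive, and `interaction_01` deviating from zero signals that the pair's joint effect is not additive.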

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare