OpenAlex · Updated hourly · Last updated: 19.03.2026, 23:51

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable AI (XAI) for Neonatal Pain Assessment via Influence Function Modification

2024 · 0 citations · Open Access
Open full text at publisher

Citations: 0

Authors: 5

Year: 2024

Abstract

As machine learning increasingly plays a crucial role in various medical applications, the need for improved explainability of these complex, often opaque models becomes more urgent. Influence functions have emerged as a critical method for explaining these black-box models. Influence functions measure how much each training sample affects a model's predictions on test data, with previous research indicating that the most influential training samples usually exhibit a high degree of semantic similarity to the test point. Building on this concept, we propose a novel approach that modifies the influence function for more precise influence estimations. This involves adding a new weighting factor to the influence function based on the similarity of the test and training data. We employ cosine similarity, Euclidean distance, and the structural similarity index to calculate this weight. The modified influence method is evaluated on a neonatal pain assessment model to explain the decision, revealing excellent performance in identifying influential training points compared to the baseline method. These results show the effectiveness of the proposed approach in elucidating the decision-making process.
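The weighting idea in the abstract can be sketched in a few lines: each training point's base influence score is rescaled by its similarity to the test point. The sketch below uses cosine similarity (one of the three measures named above); the function names, the precomputed `base_influence` scores, and the flattened feature vectors are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened feature vectors.
    a, b = np.ravel(a), np.ravel(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def weighted_influence(base_influence, train_x, test_x):
    # Rescale each training point's base influence score by its
    # similarity to the test point. Cosine similarity is used here;
    # Euclidean distance or SSIM could be substituted analogously.
    weights = np.array([cosine_similarity(x, test_x) for x in train_x])
    return base_influence * weights

# Toy example: three training points, one test point,
# with hypothetical base influence scores.
rng = np.random.default_rng(0)
train = rng.normal(size=(3, 4))
test = rng.normal(size=4)
scores = weighted_influence(np.array([0.5, -0.2, 0.9]), train, test)
```

A training point identical to the test point keeps its full base influence (weight 1), while dissimilar points are down-weighted, which matches the observation that the most influential samples tend to be semantically similar to the test point.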


Topics

COVID-19 diagnosis using AI
Artificial Intelligence in Healthcare and Education
Machine Learning in Healthcare