OpenAlex · Updated hourly · Last updated: 16.03.2026, 06:54

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Exploring Explainability in Federated Learning: A Comparative Study on Brain Age Prediction

2025 · 0 citations · Communications in Computer and Information Science · Open Access
Open full text at publisher

Citations: 0 · Authors: 5 · Year: 2025

Abstract

Predicting brain age from neuroimaging data is increasingly used to study aging trajectories and detect deviations linked to neurological conditions. Machine learning models trained on large datasets have shown promising results, but data privacy regulations and the challenge of sharing medical data across institutions limit the feasibility of centralized training. Federated Learning (FL) offers a solution by allowing multiple sites to collaboratively train a model without sharing raw data. However, it remains unclear how FL affects the explainability of these models, raising concerns about the consistency and reliability of their predictions. In this study, we analyze the consistency of model explanations between centralized and federated training paradigms. Using DeepSHAP, we compare feature attributions in brain age prediction models trained on the multi-site, publicly available OpenBHB dataset. We examine the impact of how data is distributed across sites (IID vs. non-IID), the number of sites participating per training round (sampling rate), and different FL aggregation methods (FedAVG, FedProx). Our findings show that federated models produce different explanations than centralized models, even when trained on the same data and task. Non-IID data distributions reduce the consistency of explanations, while including a larger number of sites per training round improves stability. Interestingly, some federated models trained on non-IID data capture biologically meaningful patterns of brain aging even more effectively than centralized models. These results suggest that careful choices about how data is distributed and how training is conducted in FL can impact both model accuracy and interpretability.
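The two mechanisms at the core of the abstract can be sketched briefly: FedAVG aggregates site-local model parameters into a global model by sample-count-weighted averaging, and explanation consistency can be quantified by comparing attribution maps (e.g. DeepSHAP values) from two models. This is a minimal illustrative sketch, not the paper's implementation; the toy parameter vectors, site sizes, and the choice of cosine similarity as the consistency metric are assumptions.

```python
import numpy as np

def fedavg(site_params, site_sizes):
    """FedAVG aggregation: average per-site parameter vectors,
    weighted by each site's number of training samples."""
    weights = np.array(site_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, site_params))

def attribution_consistency(shap_a, shap_b):
    """Cosine similarity between two flattened feature-attribution maps,
    e.g. DeepSHAP values from a federated vs. a centralized model.
    1.0 means identical direction of attributions; values near 0 mean
    the explanations disagree. (Illustrative metric, assumed here.)"""
    a, b = np.ravel(shap_a), np.ravel(shap_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: three sites with different sample counts
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 100]
global_params = fedavg(params, sizes)  # -> array([3.0, 4.0])
```

In a real FL round, `site_params` would hold each site's updated network weights after local training, and the aggregated `global_params` would be broadcast back for the next round; FedProx differs only in adding a proximal regularization term during local training, not in this aggregation step.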

Topics

Machine Learning in Healthcare · Privacy-Preserving Technologies in Data · Artificial Intelligence in Healthcare and Education