OpenAlex · Updated hourly · Last updated: 2026-04-01, 17:19

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Differential privacy enables fair and accurate AI-based analysis of speech disorders while protecting patient data

2025 · 1 citation · npj Artificial Intelligence · Open Access

Citations: 1
Authors: 9
Year: 2025

Abstract

Deep learning models hold promise for analyzing speech disorders, but their reliance on sensitive data raises privacy concerns. While differential privacy (DP) has been applied in medical imaging, its use in pathological speech remains underexplored. This study investigates the impact of DP on speech-based diagnosis, focusing on trade-offs between privacy, accuracy, and fairness. Using a real-world dataset of 200 hours of recordings from 2839 German-speaking participants, we observed a maximum accuracy reduction of 3.85% when training with DP at high privacy levels. We also demonstrated the vulnerability of non-private models to gradient inversion attacks and DP's success in preventing them. To explore potential generalizability across languages and disorders, we applied our method to a Spanish Parkinson's dataset, showing that task-specific pretraining can mitigate performance losses. Fairness analysis revealed minimal gender bias but highlighted age-related disparities. Our results suggest DP can preserve diagnostic utility in pathological speech while addressing privacy and fairness, supporting broader deployment in privacy-sensitive clinical applications.
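The abstract does not describe the training mechanism in detail; DP training of deep models typically uses DP-SGD (per-example gradient clipping followed by calibrated Gaussian noise, as in Abadi et al.). As a hedged illustration only — function names and parameter values below are assumptions, not taken from the paper — the core step can be sketched in plain Python:

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD update direction: clip each example's gradient to
    L2 norm <= clip_norm, average the clipped gradients, then add
    Gaussian noise with std noise_multiplier * clip_norm / batch_size."""
    rng = rng or random.Random(0)
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))  # clip, never amplify
        clipped.append([x * scale for x in g])
    n = len(clipped)
    avg = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_multiplier * clip_norm / n
    return [a + rng.gauss(0.0, sigma) for a in avg]

# Hypothetical example: three per-example gradients in 2 dimensions
grads = [[3.0, 4.0], [0.1, 0.2], [-2.0, 1.0]]
noisy_grad = dp_sgd_step(grads, clip_norm=1.0)
```

Clipping bounds each participant's influence on the update, and the noise scale (together with the sampling rate and number of steps) determines the privacy budget ε; higher privacy levels, as studied in the paper, correspond to smaller ε and more noise.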
