This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Differential privacy enables fair and accurate AI-based analysis of speech disorders while protecting patient data
Citations: 1
Authors: 9
Year: 2025
Abstract
Deep learning models hold promise for analyzing speech disorders, but their reliance on sensitive data raises privacy concerns. While differential privacy (DP) has been applied in medical imaging, its use in pathological speech remains underexplored. This study investigates the impact of DP on speech-based diagnosis, focusing on the trade-offs between privacy, accuracy, and fairness. Using a real-world dataset of 200 hours of recordings from 2839 German-speaking participants, we observed a maximum accuracy reduction of 3.85% when training with DP at high privacy levels. We also demonstrated the vulnerability of non-private models to gradient inversion attacks and DP's success in preventing them. To explore potential generalizability across languages and disorders, we applied our method to a Spanish Parkinson's dataset, showing that task-specific pretraining can mitigate performance losses. Fairness analysis revealed minimal gender bias but highlighted age-related disparities. Our results suggest that DP can preserve diagnostic utility in pathological speech analysis while addressing privacy and fairness, supporting broader deployment in privacy-sensitive clinical applications.
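The abstract describes training with differential privacy. The standard mechanism for this setting is DP-SGD: clip each per-example gradient to a fixed norm, sum, and add Gaussian noise before the parameter update. A minimal NumPy sketch of one such step follows; it is illustrative only, not the authors' implementation, and all function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are assumptions.

```python
import numpy as np

def dp_sgd_step(per_example_grads, params, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1):
    """One DP-SGD update: clip per-example gradients, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale each example's gradient so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (sensitivity).
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return params - lr * noisy_mean
```

Higher `noise_multiplier` values give stronger privacy guarantees (lower epsilon) at the cost of noisier updates, which is the privacy-accuracy trade-off the study quantifies.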
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations