This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Navigating Privacy Risks in Generative AI: Concerns, Challenges, and Potential Solutions
Citations: 2
Authors: 1
Year: 2026
Abstract
The rapid advancement of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) has revolutionized numerous applications across healthcare, finance, and customer service. However, these technological breakthroughs introduce significant privacy risks, as models may inadvertently memorize and expose sensitive information from their training data. This paper provides a comprehensive analysis of current privacy vulnerabilities in GenAI systems, including membership inference attacks, model inversion attacks, data extraction techniques, and data poisoning vulnerabilities. We examine state-of-the-art mitigation strategies including differential privacy (DP), cryptographic methods, anonymization techniques, and perturbation strategies. Through analysis of real-world case studies and empirical evidence, we demonstrate that current privacy-preserving techniques, while promising, face significant utility-privacy trade-offs. Our findings indicate that (ε, δ)-differential privacy with ε = 5 and δ = 10^-6 provides adequate protection for most practical applications, though stronger guarantees may be necessary for highly sensitive data. We conclude by presenting a comprehensive framework for user-centric privacy design and identifying critical areas for future research in privacy-preserving generative AI.
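To make the privacy budget in the abstract concrete: the simplest mechanism satisfying pure ε-differential privacy is the Laplace mechanism, which adds noise of scale sensitivity/ε to a query answer. The sketch below is illustrative only (the function name, the example query, and its sensitivity of 1 are assumptions, not taken from the paper), shown at the ε = 5 budget the abstract discusses; the paper's (ε, δ) guarantee would in practice use a mechanism such as Gaussian noise or DP-SGD instead.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value under epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon (the Laplace mechanism)."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: a counting query (sensitivity 1, since adding or
# removing one record changes the count by at most 1) answered at epsilon = 5.
noisy_count = laplace_mechanism(true_value=100.0, sensitivity=1.0, epsilon=5.0)
```

A smaller ε forces a larger noise scale (here 1/5 = 0.2), which is the utility-privacy trade-off the abstract refers to: stronger privacy guarantees degrade the accuracy of released answers.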
Related Works
k-Anonymity: A Model for Protecting Privacy
2002 · 8,389 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,864 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,590 citations
Deep Learning with Differential Privacy
2016 · 5,571 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,558 citations