This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Statistical disclosure controls for machine learning models
Citations: 2
Authors: 4
Year: 2021
Abstract
Artificial Intelligence (AI) models are trained on large datasets. Where the training data is sensitive, the data holders need to consider risks posed by access to the training data and risks posed by the models that are released. The first problem can be considered solved: there are multiple tested solutions delivering secure access to sensitive data for research purposes. These include robust 'statistical disclosure control' (SDC) procedures for checking the confidentiality risk in outputs released from the secure environment. However, these SDC procedures are designed for statistical outputs; it is not clear how they relate to AI model specifications created within the secure environment. Similarly, there is a small but growing literature on re-identification and other risks from AI models trained on personal data. However, this does not consider the operational circumstances which might limit opportunities for misuse. We bring these two fields together to consider:
• Is there any conceptual risk from releasing AI model specifications from a controlled environment?
• If so, is there any practical risk?
• If so, are there effective controls to minimise that practical risk without excessive cost or damage to the data/models?
We show that there is certainly a theoretical risk, which also seems to have practical validity. There exist both statistical/technical controls to reduce risk and operational controls which might be relevant for restricted environments. However, there remains a very large degree of uncertainty, including over such fundamental questions as what exactly is 'disclosive' in ML models.
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,284 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,233 citations
"Why Should I Trust You?"
2016 · 14,179 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,096 citations