This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Data sharing in clinical trials – practical guidance on anonymising trial datasets
Citations: 56
Authors: 6
Year: 2018
Abstract
Background: There is an increasing demand from non-commercial funders that trialists provide access to trial data once the primary analysis is completed. This must take into account concerns about identifying individual trial participants, as well as legal and regulatory requirements.

Methods: Using the good practice guidance produced by work funded by the Medical Research Council Hubs for Trials Methodology Research (MRC HTMR), we anonymised a dataset from a recently completed trial. Using this example, we present practical guidance on how to anonymise a dataset and describe rules that could be applied to other trial datasets. We also describe how these rules might differ depending on whether the data are made freely available to all or can only be accessed with specific permission and data usage agreements in place.

Results: Following the good practice guidelines, we successfully created a controlled-access model for trial data sharing. The data were assessed on a case-by-case basis, classifying variables as direct, indirect, or superfluous identifiers, with different anonymisation methods assigned depending on the type of identifier. A final dataset was created and checks of the anonymised dataset were applied. Lastly, a procedure for release of the data was implemented to complete the process.

Conclusions: We have implemented a practical solution to the data anonymisation process, resulting in a bespoke anonymised dataset for a recently completed trial. We have learned useful lessons about the efficiency of the process going forward, the need to balance anonymity with data utilisation, and future work that should be undertaken.
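The case-by-case classification described in the abstract (direct, indirect, and superfluous identifiers, each with its own anonymisation rule) can be sketched as follows. This is a minimal illustrative sketch only: the variable names, classification table, and generalisation rules are assumptions for the example, not taken from the trial dataset or the authors' implementation.

```python
# Illustrative identifier-classification step: each variable is labelled
# direct, indirect, or superfluous, and a different rule is applied per class.
# Direct and superfluous identifiers are dropped; indirect identifiers are
# generalised (coarsened); analysis variables are kept unchanged.

# Hypothetical classification of trial-dataset columns.
CLASSIFICATION = {
    "patient_name": "direct",          # direct identifier: remove entirely
    "date_of_birth": "direct",
    "postcode": "indirect",            # indirect identifier: generalise
    "age": "indirect",
    "internal_db_key": "superfluous",  # no analytic value: drop
    "treatment_arm": None,             # analysis variable: keep as-is
    "outcome_score": None,
}

def generalise(var: str, value):
    """Illustrative generalisation rules for indirect identifiers."""
    if var == "age":
        lo = (value // 10) * 10
        return f"{lo}-{lo + 9}"        # replace exact age with a 10-year band
    if var == "postcode":
        return value.split()[0]        # keep the outward code only
    return None

def anonymise_record(record: dict) -> dict:
    """Apply the per-class rule to each variable in one participant record."""
    out = {}
    for var, value in record.items():
        cls = CLASSIFICATION.get(var)
        if cls in ("direct", "superfluous"):
            continue                   # drop the variable entirely
        if cls == "indirect":
            out[var] = generalise(var, value)
        else:
            out[var] = value           # non-identifying: keep unchanged
    return out

record = {"patient_name": "A. Smith", "date_of_birth": "1970-01-02",
          "postcode": "AB1 2CD", "age": 54, "internal_db_key": 991,
          "treatment_arm": "B", "outcome_score": 17.5}
print(anonymise_record(record))
# → {'postcode': 'AB1', 'age': '50-59', 'treatment_arm': 'B', 'outcome_score': 17.5}
```

In a controlled-access model, the generalisation rules could be less aggressive than for a fully open release, since a data usage agreement provides an additional safeguard.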
Similar works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,451 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,968 citations
Deep Learning with Differential Privacy
2016 · 5,759 citations
Federated Machine Learning
2019 · 5,736 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,614 citations