This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Optimizing the synthesis of clinical trial data using sequential trees
Citations: 44
Authors: 3
Year: 2020
Abstract
OBJECTIVE: With the growing demand for sharing clinical trial data, scalable methods to enable privacy-protective access to high-utility data are needed. Data synthesis is one such method. Sequential trees are commonly used to synthesize health data. It is hypothesized that the utility of the generated data depends on the variable order. No assessments of the impact of variable order on synthesized clinical trial data have been performed thus far. Through simulation, we aim to evaluate the variability in the utility of synthetic clinical trial data as variable order is randomly shuffled and to implement an optimization algorithm to find a good order if variability is too high.
MATERIALS AND METHODS: Six oncology clinical trial datasets were evaluated in a simulation. Three utility metrics were computed comparing real and synthetic data: univariate similarity, similarity in multivariate prediction accuracy, and a distinguishability metric. Particle swarm optimization was implemented to optimize variable order and was compared with a curriculum learning approach to ordering variables.
RESULTS: As the number of variables in a clinical trial dataset increases, there is a pattern of a marked increase in the variability of data utility with order. Particle swarm optimization with a distinguishability hinge loss ensured adequate utility across all 6 datasets. The hinge threshold was selected to avoid overfitting, which can create a privacy problem. This approach was superior to curriculum learning in terms of utility.
CONCLUSIONS: The optimization approach presented in this study gives a reliable way to synthesize high-utility clinical trial datasets.
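The abstract describes optimizing a discrete variable ordering with particle swarm optimization under a hinge loss on distinguishability. The paper's actual implementation is not shown here; the sketch below is only illustrative, using the common "random keys" trick (continuous particle positions decoded into an ordering by argsort) and a toy stand-in fitness function (`toy_distinguishability`, the hinge `threshold`, and all PSO hyperparameters are assumptions, not values from the paper).

```python
import numpy as np

def hinge_loss(distinguishability, threshold=0.55):
    # Penalize only when real vs. synthetic records can be told apart
    # better than the threshold; below it the loss is zero, which avoids
    # over-optimizing the order against the metric (a privacy concern).
    return max(0.0, distinguishability - threshold)

def decode_order(position):
    # Random-keys encoding: a continuous position vector becomes a
    # variable ordering by ranking its components.
    return np.argsort(position)

def pso_optimize_order(fitness, n_vars, n_particles=20, n_iters=50, seed=0):
    # Minimal PSO over continuous keys; fitness receives a decoded order.
    rng = np.random.default_rng(seed)
    pos = rng.random((n_particles, n_vars))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(decode_order(p)) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = pbest_val.min()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, n_vars))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([fitness(decode_order(p)) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.min() < g_val:
            g_val = vals.min()
            g = pos[np.argmin(vals)].copy()
    return decode_order(g), g_val

# Toy stand-in for a distinguishability metric: fraction of variables
# out of place relative to a hypothetical "good" order.
target = np.arange(8)
def toy_distinguishability(order):
    return float(np.mean(order != target))

best_order, best_loss = pso_optimize_order(
    lambda o: hinge_loss(toy_distinguishability(o)), n_vars=8)
```

In practice the fitness would train a classifier to distinguish real from synthetic records generated under each candidate order, which is far more expensive than this toy objective; the random-keys decoding is simply one standard way to apply a continuous optimizer to a permutation problem.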
Related works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,447 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,958 citations
Deep Learning with Differential Privacy
2016 · 5,740 citations
Federated Machine Learning
2019 · 5,714 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,610 citations