BACKGROUND
Electronic health records (EHRs) are a valuable source of patient information, but they must be properly de-identified before they can be shared with researchers, a process that requires expertise and time. Synthetic data, by contrast, considerably reduces the restrictions on the use and sharing of real data, allowing researchers to access it more rapidly and with far fewer privacy constraints. There is therefore growing interest in methods for generating synthetic data that protect patients' privacy while faithfully reflecting the original data.
OBJECTIVE
The goal of this paper is to develop a model that generates useful synthetic longitudinal health data while protecting the privacy of the patients whose data it is derived from.
METHODS
In this paper, we investigate how best to generate synthetic health data, with a focus on longitudinal observations. We develop a generative model that relies on the generalized canonical polyadic (GCP) tensor decomposition. The model samples from a latent factor matrix containing patient information using sequential decision trees, copulas, and Hamiltonian Monte Carlo methods. We apply the model to samples from the MIMIC-III dataset and conduct extensive analyses and experiments to identify the approach that yields the best results.
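The sketch below illustrates the general pipeline described above; it is not the authors' implementation. It uses the standard CP decomposition (via TensorLy) as a stand-in for the GCP decomposition, and a hand-rolled Gaussian copula in place of the paper's three sampling methods. The tensor shape, rank, and variable names are illustrative assumptions.

```python
# Minimal sketch: factorize a patients x features x time tensor, resample the
# patient factor matrix with a Gaussian copula, and reconstruct synthetic data.
# Standard CP is used here as a stand-in for the paper's GCP decomposition.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from scipy import stats

rng = np.random.default_rng(0)

# Toy nonnegative "patients x features x time points" tensor standing in for EHR data.
X = rng.gamma(shape=2.0, scale=1.0, size=(200, 15, 10))

# Factorize: the first factor matrix holds one latent row per patient.
rank = 5
weights, factors = parafac(tl.tensor(X), rank=rank)
patient_factors, feature_factors, time_factors = factors

def gaussian_copula_sample(data, n_samples, rng):
    """Fit a Gaussian copula to `data` (rows = patients) and draw new rows."""
    n, d = data.shape
    # Map each column to normal scores via its empirical CDF.
    ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
    z = stats.norm.ppf(ranks / (n + 1))
    corr = np.corrcoef(z, rowvar=False)
    # Sample correlated normals, then map back through empirical quantiles.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u_new = stats.norm.cdf(z_new)
    return np.column_stack(
        [np.quantile(data[:, j], u_new[:, j]) for j in range(d)]
    )

# Simulate latent rows for a (possibly different) number of synthetic patients.
synthetic_patient_factors = gaussian_copula_sample(np.asarray(patient_factors), 300, rng)

# Recombine with the fixed feature and time factors to get synthetic records.
X_synth = tl.cp_to_tensor(
    (weights, [synthetic_patient_factors, feature_factors, time_factors])
)
print(X_synth.shape)  # (300, 15, 10)
```

Because only the patient factor matrix is resampled, the number of synthetic patients can differ from the number of patients in the original data, as discussed in the results.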
RESULTS
In certain experiments, all of the simulation methods used in the model achieved the same high level of performance. The proposed model addresses the challenge of sampling patients from electronic health records: it can simulate a variety of synthetic patients, whose number need not match the number of patients in the original data. Our analyses and experimental findings indicate that the model is a promising method for generating longitudinal health data.
CONCLUSIONS
We have presented a generative model for producing synthetic longitudinal health data, formulated using the generalized CP decomposition. Following factorization, we have provided three approaches for synthesizing and simulating the latent factor matrix. In short, we have reduced the challenge of synthesizing massive longitudinal health data to that of synthesizing a much smaller, non-longitudinal dataset.
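For reference, one common way to write the GCP formulation that this approach builds on (the notation here is ours and illustrative, not taken from the paper): for a three-way tensor with entries $x_{ijk}$, GCP seeks rank-$R$ factor matrices $A$, $B$, and $C$ that minimize an elementwise loss $f$, generalizing the least-squares loss of standard CP:

$$
\min_{A,B,C} \; \sum_{i,j,k} f\!\left(x_{ijk},\, m_{ijk}\right), \qquad m_{ijk} = \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr},
$$

where the rows of the patient factor matrix $A$ are the latent representations that are subsequently synthesized.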