BACKGROUND
Recent advancements in Generative Adversarial Networks (GANs) and sophisticated language models have significantly impacted the synthesis and augmentation of medical data. These technologies facilitate the creation of high-quality, realistic datasets essential for enhancing machine learning (ML) applications in healthcare. GANs, through their adversarial framework, and Large Language Models (LLMs), with their advanced Natural Language Processing (NLP) capabilities, offer innovative solutions for generating synthetic data that mirror real-world medical information. This is particularly valuable in scenarios constrained by data privacy and availability. However, challenges persist in accurately capturing complex associations within medical datasets. Misrepresenting these associations can yield synthetic data that poorly reflect the variability and relationships of real-world data, degrading model performance in clinical applications.
OBJECTIVE
This study aims to evaluate the effectiveness of Synthetic Data Generation (SDG) methods in replicating the correlation structures of real medical data and assess their performance in downstream tasks using Random Forests (RF). We compare two SDG approaches, CTGAN and the Tabula-Framework, with a focus on their ability to maintain accurate data correlations and their implications for model accuracy and variable importance.
METHODS
We assess synthetic data generation methods using real-world and simulated datasets. Simulated data comprise ten Gaussian variables with different correlation structures, imposed via Cholesky decomposition, from which binary target variables are derived. Real-world datasets include Body Performance (BP) with 13,393 samples for fitness classification, Wisconsin Breast Cancer (BC) with 569 samples for tumor diagnosis, and Diabetes (DB) with 768 samples for diabetes prediction. Data quality is evaluated through the Euclidean distance (L² norm) between the original and synthetic correlation matrices and through downstream classification tasks using RF, scored with the F₁ metric. Variable importance (VIMP) measures, i.e., Gini impurity and permutation-based importance, are used to examine the mechanisms behind the RF results. For each model and epoch combination, 100 synthetic samples are drawn, and an outlier analysis is conducted to ensure robust performance evaluation.
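As a concrete illustration of this pipeline, the Python sketch below simulates correlated Gaussian data via Cholesky decomposition, computes the L² distance between original and synthetic correlation matrices, and evaluates downstream RF utility with F₁ and both VIMP measures. It is a minimal sketch under stated assumptions: the equicorrelation structure, the logistic rule for the binary target, and the noise-perturbed stand-in for synthetic data are hypothetical; in the study, the synthetic data come from CTGAN or the Tabula-Framework.

# Minimal sketch of the simulation and evaluation steps (assumptions noted above).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 1000, 10

# Simulate ten Gaussian variables with a chosen correlation structure by multiplying
# i.i.d. standard normals with the Cholesky factor of the target correlation matrix.
target_corr = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)   # hypothetical equicorrelation
X = rng.standard_normal((n, p)) @ np.linalg.cholesky(target_corr).T

# Derive a binary target (hypothetical logistic rule on the first three variables).
logits = X[:, :3].sum(axis=1)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Split the "real" data; the synthetic data here is a noise-perturbed stand-in
# derived from the training portion (in the study: CTGAN or Tabula output).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
X_synth = X_tr + rng.normal(scale=0.1, size=X_tr.shape)

# Data quality: Euclidean (L2) distance between original and synthetic correlation matrices.
corr_dist = np.linalg.norm(np.corrcoef(X_tr, rowvar=False) - np.corrcoef(X_synth, rowvar=False))
print("L2 correlation-matrix distance:", corr_dist)

# Downstream utility: train an RF on the synthetic data, evaluate F1 on the real hold-out set.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_synth, y_tr)
print("F1 on real hold-out data:", f1_score(y_te, rf.predict(X_te)))

# Variable importance: impurity-based (Gini) and permutation-based VIMP.
gini_vimp = rf.feature_importances_
perm_vimp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0).importances_mean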
RESULTS
In smaller datasets (n = 1,000 samples), synthetic data utility remains stable under both high and moderate correlations, with moderate correlations occasionally enhancing utility. However, as correlation complexity increases, particularly when stronger correlations span multiple features, models struggle, reflected in higher L² correlation-matrix distances. CTGAN improves with more training epochs but requires substantial tuning to handle complex patterns, while LLM-based methods show promise on larger datasets despite their computational demands. Real-world data mirror these findings, with LLM-based methods outperforming CTGAN in scenarios with intricate dependency structures. VIMP score analysis underscores the importance of aligning model complexity with the data's correlation structure.
CONCLUSIONS
Our findings emphasize that correlation complexity, rather than correlation strength, is the key challenge in synthetic data generation. While CTGAN and LLM-based methods show varying success depending on dataset size and complexity, careful tuning and model selection are essential. Further research should focus on optimizing training protocols, exploring simpler neural network architectures, and expanding simulations to better handle nonlinear and higher-order interactions in complex datasets.