Abstract
Data leakage during machine learning (ML) preprocessing is a critical issue in which information from outside the training set, typically from the test data, inadvertently influences the training process, resulting in artificially high performance metrics and undermining model reliability. This study addresses the insufficient exploration of data leakage across diverse ML domains, highlighting the need for comprehensive investigation to ensure robust and dependable ML models in real-world applications. Significant discrepancies in model performance due to data leakage were observed, with notable variations in F1 scores and ROC AUC values on the Breast Cancer dataset. Analysis of the Tic-Tac-Toe Endgame dataset revealed that the impact varies across models such as Ridge, SGD, GaussianNB, and MLP, underscoring the profound effect of data leakage. On the German Credit Scoring dataset, models such as decision trees (DT) and gradient boosting (GB) showed slight improvements in recall and F1 scores without data leakage, indicating reduced overfitting. Additionally, the PassiveAggressive, Ridge, SGD, GaussianNB, and Nearest Centroid models exhibited shifts in performance metrics, showing that the magnitude and direction of leakage effects are model-dependent. The study also quantified raw data leakage rates, such as 6.79% for Spambase and 1.99% for Breast Cancer. These findings emphasize the need for meticulous data management and validation to mitigate leakage effects, which is crucial for developing reliable ML models.
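To make the phenomenon concrete, the sketch below (illustrative only, not the paper's experimental code) shows the classic preprocessing-leakage pattern in scikit-learn: fitting a scaler on the full dataset before splitting leaks test-set statistics into the training data, whereas fitting the scaler inside a pipeline on the training split alone does not. The Breast Cancer dataset and RidgeClassifier are stand-ins chosen because both appear in the study; the specific preprocessing step is an assumption made for illustration.

```python
# Minimal sketch of preprocessing data leakage (illustrative assumptions,
# not the paper's actual pipeline).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import f1_score

X, y = load_breast_cancer(return_X_y=True)

# Leaky variant: the scaler is fit on the FULL dataset, so statistics from
# the future test split leak into the transformed training data.
X_leaky = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
leaky_model = RidgeClassifier().fit(X_tr, y_tr)
print("F1 with leakage:   ", f1_score(y_te, leaky_model.predict(X_te)))

# Leak-free variant: split first, then let the pipeline fit the scaler on
# the training portion only, keeping the test set unseen during preprocessing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clean_model = make_pipeline(StandardScaler(), RidgeClassifier()).fit(X_tr, y_tr)
print("F1 without leakage:", f1_score(y_te, clean_model.predict(X_te)))
```

On small, well-separated datasets the two scores may differ only slightly; the point of the sketch is the structural difference, namely that in the leak-free variant no test-set statistic ever reaches the fitted preprocessor.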