Abstract
Background
Compared to traditional supervised machine learning approaches that employ fully labeled samples, positive-unlabeled (PU) learning techniques aim to classify “unlabeled” samples based on a smaller proportion of known positive examples. This more challenging modeling goal reflects many real-world scenarios in which negative examples are not available, posing direct challenges to assessing prediction accuracy and robustness. While several studies have evaluated predictions learned from only definitive positive examples, few have investigated whether correct classification of a high proportion of known positive (KP) samples from among the unlabeled samples can act as a surrogate indicator of model performance.
Results
In this study, we report a novel methodology combining multiple established PU learning-based strategies to evaluate the potential of KP samples to accurately classify unlabeled samples without using “ground truth” positive and negative labels for validation. To address model robustness, we report the first application of permutation testing in PU learning. Multivariate synthetic datasets and real-world high-dimensional benchmark datasets were employed to validate the proposed pipeline under varied underlying ground-truth class label compositions within the unlabeled set and different proportions of KP examples. Comparison of model performance on actual versus permuted labels could be used to distinguish reliable from unreliable models.
Conclusions
As in fully supervised machine learning, permutation testing offers a means to set a baseline “no-information rate” benchmark in semi-supervised PU learning inference tasks against which model performance can be compared.
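The core idea, comparing a surrogate performance metric (recovery of held-out known positives from the unlabeled pool) against a permutation-derived null baseline, can be sketched as below. This is a minimal illustration, not the authors' implementation: the bagging-SVM scorer, the recall-in-top-fraction surrogate, and all parameter choices (n_estimators, kp_frac, top_frac, the number of permutations) are assumptions made for the example, with scikit-learn and NumPy as assumed dependencies.

```python
# Illustrative sketch only (not the published pipeline): a PU bagging-SVM scorer
# with a label-permutation baseline. Metric and hyperparameters are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC


def pu_bagging_scores(X, known_pos_idx, unlabeled_idx, n_estimators=20, rng=None):
    """Average out-of-bag decision scores for unlabeled samples from SVMs trained on
    known positives vs. bootstrap draws of the unlabeled set treated as negatives."""
    rng = np.random.default_rng(rng)
    scores = np.zeros(len(unlabeled_idx))
    counts = np.zeros(len(unlabeled_idx))
    for _ in range(n_estimators):
        boot = rng.choice(len(unlabeled_idx), size=len(known_pos_idx), replace=True)
        train_idx = np.concatenate([known_pos_idx, unlabeled_idx[boot]])
        y_train = np.concatenate([np.ones(len(known_pos_idx)), np.zeros(len(boot))])
        clf = SVC(kernel="rbf", gamma="scale").fit(X[train_idx], y_train)
        oob = np.setdiff1d(np.arange(len(unlabeled_idx)), boot)  # out-of-bag positions
        scores[oob] += clf.decision_function(X[unlabeled_idx[oob]])
        counts[oob] += 1
    return scores / np.maximum(counts, 1)


def held_out_kp_recall(X, pos_idx, kp_frac=0.3, top_frac=0.3, rng=None):
    """Surrogate metric: hide a subset of the known positives inside the unlabeled
    pool and measure how many are recovered in the top-ranked fraction."""
    rng = np.random.default_rng(rng)
    pos_idx = rng.permutation(pos_idx)
    n_kp = int(kp_frac * len(pos_idx))
    kp_idx, hidden_pos = pos_idx[:n_kp], pos_idx[n_kp:]
    unlabeled_idx = np.setdiff1d(np.arange(len(X)), kp_idx)
    scores = pu_bagging_scores(X, kp_idx, unlabeled_idx, rng=rng)
    n_top = int(top_frac * len(unlabeled_idx))
    top = unlabeled_idx[np.argsort(scores)[::-1][:n_top]]
    return np.isin(hidden_pos, top).mean()


# Synthetic multivariate data: class 1 plays the role of the true positives.
X, y_true = make_classification(n_samples=600, n_features=20, n_informative=5,
                                weights=[0.7, 0.3], random_state=0)
pos_idx = np.where(y_true == 1)[0]

observed = held_out_kp_recall(X, pos_idx, rng=1)

# Permutation baseline ("no-information rate"): repeat with randomly drawn
# pseudo-positives so any apparent recall reflects chance structure only.
rng = np.random.default_rng(2)
null = [held_out_kp_recall(X, rng.choice(len(X), size=len(pos_idx), replace=False), rng=i)
        for i in range(20)]

print(f"observed held-out KP recall: {observed:.2f}")
print(f"permutation baseline: {np.mean(null):.2f} +/- {np.std(null):.2f}")
```

In this sketch, a model is deemed informative only if its held-out KP recall clearly exceeds the permutation baseline; the baseline plays the same role as the no-information rate in fully supervised settings.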
Publisher
Cold Spring Harbor Laboratory