Abstract
Traditional machine learning classifiers face a serious challenge when biological data are class-imbalanced, because they are strongly biased toward the majority class. More recently, generative adversarial networks (GANs) have been applied to imbalanced data classification. In a standard GAN, the distribution of the minority class data fed into the discriminator is unknown, and the input to the generator is random noise ($ z $) drawn from a standard normal distribution $ N(0, 1) $. This inevitably increases the difficulty of training the network and reduces the quality of the generated data. To address this problem, we propose a new oversampling algorithm that combines the Bootstrap method with the Wasserstein GAN (BM-WGAN). In our approach, the input to the generator network is data ($ z $) drawn from the minority class distribution estimated by the Bootstrap method. Once network training is complete, the generator is used to synthesize minority class data. Through these steps, the generator learns useful features of the minority class and produces realistic-looking minority class samples. The experimental results indicate that BM-WGAN greatly improves classification performance compared with other oversampling algorithms. The BM-WGAN implementation is available at: https://github.com/ithbjgit1/BMWGAN.git.
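The core idea stated above is that the generator's input $ z $ is drawn from a bootstrap estimate of the minority class distribution rather than from $ N(0, 1) $. The following is a minimal sketch of that idea, assuming a PyTorch implementation; the class names, network sizes, and hyperparameters are illustrative assumptions and are not taken from the authors' released code.

```python
# Minimal BM-WGAN-style sketch (illustrative, not the authors' implementation):
# the generator input z is a bootstrap resample of the minority class, not N(0,1) noise.
import numpy as np
import torch
import torch.nn as nn

def bootstrap_minority(X_min, n):
    """Draw n samples with replacement from the minority class
    (a simple bootstrap estimate of its distribution)."""
    idx = np.random.randint(0, len(X_min), size=n)
    return torch.tensor(X_min[idx], dtype=torch.float32)

class Generator(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, dim))
    def forward(self, z):
        return self.net(z)

class Critic(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x)

def train_bm_wgan(X_min, epochs=500, batch=32, clip=0.01, n_critic=5):
    dim = X_min.shape[1]
    G, D = Generator(dim), Critic(dim)
    opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
    opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)
    for _ in range(epochs):
        for _ in range(n_critic):
            real = bootstrap_minority(X_min, batch)     # real minority samples
            z = bootstrap_minority(X_min, batch)        # generator input: bootstrap draws, not N(0,1)
            fake = G(z).detach()
            loss_d = -(D(real).mean() - D(fake).mean()) # WGAN critic loss
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            for p in D.parameters():                    # weight clipping, as in the original WGAN
                p.data.clamp_(-clip, clip)
        z = bootstrap_minority(X_min, batch)
        loss_g = -D(G(z)).mean()                        # generator loss
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # After training, G(bootstrap_minority(X_min, k)) yields k synthetic minority samples.
    return G
```

In this sketch, oversampling amounts to calling the trained generator on further bootstrap draws until the minority class reaches the desired size; the exact architecture and training schedule used in the paper may differ.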
Publisher
American Institute of Mathematical Sciences (AIMS)