Abstract
In recent years, deep neural networks have advanced rapidly in engineering practice, with models growing ever larger and deeper. However, for most companies, developing large models is extremely costly and risky. Researchers usually focus on model performance while neglecting cost and accessibility, yet most routine business scenarios do not require state-of-the-art AI. A simple and inexpensive modeling method is therefore needed to meet the practical demands of such applications. In this paper, a fragmented neural network method is proposed. Inspired by the random forest algorithm, both samples and features are randomly sampled from the image data: images are randomly split into smaller pieces, weak neural networks are trained on these fragmented images, and many weak networks are then ensembled into a strong network by voting. In this way, sufficient accuracy is achieved while the complexity and data volume of each base learner are reduced, enabling mass production through parallel and distributed computing. In experiments on the MNIST and CIFAR10 datasets, we build a model pool using FNN, CNN, DenseNet, and ResNet as the basic network structures. We find that the accuracy of the ensemble of weak networks is significantly higher than that of each base learner, and that the ensemble's accuracy depends strongly on the performance of the individual base learners. The ensemble's accuracy is comparable to, or even exceeds, that of the full model, with better robustness. Unlike other similar studies, we do not pursue SOTA models; instead, we achieve results close to the full model with fewer parameters and less data.
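The two core ingredients described in the abstract, randomly cropping image fragments for each weak learner and combining the learners' class predictions by majority vote, can be sketched as follows. This is a minimal illustration under assumptions of our own (fragment size, NumPy arrays, hypothetical function names `random_fragment` and `majority_vote`), not the paper's actual implementation:

```python
import numpy as np

def random_fragment(image, frag_h, frag_w, rng):
    """Randomly crop a frag_h x frag_w patch from a 2-D image array.

    Each weak learner would be trained on patches like this, so it sees
    only a random subset of the image's features, as in random forests.
    """
    h, w = image.shape[:2]
    top = rng.integers(0, h - frag_h + 1)
    left = rng.integers(0, w - frag_w + 1)
    return image[top:top + frag_h, left:left + frag_w]

def majority_vote(predictions):
    """Combine per-learner class labels (n_learners x n_samples) by voting."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count the votes each class receives for every sample, then take the mode.
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions)
    return votes.argmax(axis=0)

rng = np.random.default_rng(0)
image = np.arange(28 * 28).reshape(28, 28)   # stand-in for one MNIST digit
patch = random_fragment(image, 14, 14, rng)  # one fragmented view of the image

# Three weak learners' class predictions for four samples.
preds = [[1, 0, 2, 2],
         [1, 1, 2, 0],
         [0, 1, 2, 2]]
print(patch.shape)           # (14, 14)
print(majority_vote(preds))  # [1 1 2 2]
```

Because each base learner trains on a small fragment rather than the whole image, the learners are cheap and can be trained independently in parallel; the vote then recovers accuracy that no single fragment-trained network reaches on its own.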
Funder
Research Foundation for Youth Scholars of Beijing Technology and Business University
Publisher
Springer Science and Business Media LLC