Affiliation:
1. Institute for Future, School of Automation, Qingdao University, Qingdao, Shandong 266000, P. R. China
2. School of Economics and Management, Inner Mongolia University of Science and Technology, Baotou, Inner Mongolia 014010, P. R. China
Abstract
Deep neural networks have achieved remarkable results in natural language processing, image classification, and other domains in recent years. However, they remain difficult to deploy on hardware-constrained or mobile devices because of their huge number of parameters and high storage and computing costs. In this paper, a new sparse iterative neural network architecture is proposed. First, pruning is used to compress the model and make the network sparse. Then the architecture is iterated on the sparse network model, improving performance without adding additional parameters. Finally, the hybrid deep learning model is evaluated on CV and NLP tasks with ANN, CNN, and Transformer models. Compared with the sparse network architecture, accuracy on the MNIST, CIFAR10, PASCAL VOC 2012, and SQuAD datasets is improved by 0.47%, 0.64%, 3.75%, and 15.06%, respectively.
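The abstract's two-step recipe (prune to obtain a sparse model, then iterate the architecture over that sparse model so capacity grows without new parameters) can be illustrated with the hedged sketch below. It uses PyTorch L1 magnitude pruning and a weight-shared block applied several times in the forward pass; the layer sizes, pruning ratio, class names (IterativeSparseBlock, SparseIterativeNet), and iteration count are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of "prune, then iterate the sparse block" -- all hyperparameters
# below are assumptions for illustration, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class IterativeSparseBlock(nn.Module):
    """One hidden block whose weights are reused across several forward
    iterations, so effective depth grows without extra parameters."""
    def __init__(self, dim=256, num_iters=3):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.act = nn.ReLU()
        self.num_iters = num_iters

    def forward(self, x):
        # Re-apply the same (sparse) layer several times; the residual
        # connection keeps the iterated mapping stable.
        for _ in range(self.num_iters):
            x = x + self.act(self.fc(x))
        return x

class SparseIterativeNet(nn.Module):
    def __init__(self, in_dim=784, dim=256, num_classes=10, num_iters=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)
        self.block = IterativeSparseBlock(dim, num_iters)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        return self.head(self.block(torch.relu(self.embed(x))))

model = SparseIterativeNet()

# Step 1: prune the shared block to make the network sparse
# (L1 magnitude pruning; the 50% ratio is an assumption).
prune.l1_unstructured(model.block.fc, name="weight", amount=0.5)
prune.remove(model.block.fc, "weight")  # bake the sparsity into the weights

# Step 2: the pruned block is iterated num_iters times inside forward(),
# improving capacity without adding parameters.
logits = model(torch.randn(8, 784))
print(logits.shape)  # torch.Size([8, 10])
```

In this sketch, sparsity comes from zeroed weights in a dense tensor; a deployment targeting constrained hardware would additionally need a sparse storage format or structured pruning to realize the memory and compute savings the abstract motivates.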
Funder
Research on the innovation and development of modern educational technology and discipline integration under the guidance of majors and driven by projects
Publisher
World Scientific Pub Co Pte Ltd
Subject
Electrical and Electronic Engineering, Hardware and Architecture
Cited by
1 article.