Author:
Jingliang Chen, Chenchen Wu, Shuisheng Chen, Yi Zhu, Bin Li
Abstract
<p>Because traditional optimization avenues such as network models and algorithms have become largely open source and are tightly bound to hardware, data processing has become an important way to improve the performance of neural networks. In this paper, we build on traditional data processing methods and propose an approach based on mini datasets that are strictly randomly partitioned during training. Using the cross-entropy loss as the measurement standard, the mini datasets are compared, screened, and processed to optimize the deep neural network. With this method, each training iteration obtains a relatively optimal result, and these per-iteration improvements accumulate to optimize the result of each epoch. To verify the effectiveness and applicability of this data processing method, experiments are carried out on the MNIST, HAGRID, and CIFAR-10 datasets, comparing results with and without the method under different hyper-parameters; the experiments confirm its effectiveness. Finally, we summarize the advantages and limitations of the method and discuss directions for future improvement.</p>
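The screening step described above can be illustrated with a minimal sketch: a batch is strictly randomly partitioned into mini datasets, the cross-entropy loss is computed on each, and the split with the lowest loss is kept. The partition count, the selection rule, and the function names here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(probs, labels):
    # Mean cross-entropy of predicted class probabilities
    # against integer class labels.
    eps = 1e-12
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def screen_mini_datasets(probs, labels, n_splits=4):
    # Strictly random partition of the batch into mini datasets,
    # then keep the split with the lowest cross-entropy
    # (hypothetical selection rule for illustration).
    idx = rng.permutation(len(labels))
    splits = np.array_split(idx, n_splits)
    losses = [cross_entropy(probs[s], labels[s]) for s in splits]
    best = int(np.argmin(losses))
    return splits[best], losses

# Toy example: 8 samples, 3 classes, softmax-like probabilities.
probs = rng.dirichlet(np.ones(3), size=8)
labels = rng.integers(0, 3, size=8)
best_split, losses = screen_mini_datasets(probs, labels)
```

In an actual training loop, the selected mini dataset would then drive the parameter update for that iteration, so each iteration works from a relatively favorable subset.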
Publisher
Angle Publishing Co., Ltd.
Subject
Computer Networks and Communications, Software
Cited by
1 article.