Abstract
The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, training deeper neural networks to obtain stable and accurate models yields artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study, in which we explored the acceleration effects obtained by enforcing, in turn, scale freeness, small worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent, independently conducted studies in which a million-node ANN was trained on non-specialized laptops. Encouraged by those results, our study now focuses on some tunable parameters, to pursue a further acceleration effect. We show that, although optimal parameter tuning is unfeasible due to the high non-linearity of ANN problems, we can come up with a set of useful guidelines that lead to speed-ups in practical cases. We find that significant reductions in execution time can generally be achieved by setting the revised fraction parameter ($\zeta$) to relatively low values.
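The article itself does not reproduce its code here, but the role of $\zeta$ can be illustrated with a minimal sketch of a sparse-evolutionary-training-style rewiring step, where $\zeta$ is taken (as in the standard SET formulation) to be the fraction of smallest-magnitude connections pruned and regrown after each training epoch. The function name, signature, and initialization scale below are illustrative assumptions, not the authors' implementation; a lower `zeta` simply means fewer connections are replaced per epoch.

```python
import numpy as np

def rewire_layer(weights, mask, zeta=0.05, rng=None):
    """One SET-style rewiring step for a sparse layer (illustrative sketch).

    A fraction `zeta` of the existing connections with the smallest
    magnitudes is pruned, and the same number of new connections is
    regrown at random among the currently empty positions, so the
    layer's overall sparsity level stays constant.
    """
    rng = rng or np.random.default_rng()
    active = np.flatnonzero(mask)              # flat indices of existing connections
    n_rewire = int(zeta * active.size)         # how many links to replace this epoch
    if n_rewire == 0:
        return weights, mask

    # Prune the weakest connections (smallest absolute weight).
    flat_w = weights.reshape(-1).copy()
    weakest = active[np.argsort(np.abs(flat_w[active]))[:n_rewire]]
    flat_mask = mask.reshape(-1).copy()
    flat_mask[weakest] = False
    flat_w[weakest] = 0.0

    # Regrow the same number of links at random empty positions.
    empty = np.flatnonzero(~flat_mask)
    regrow = rng.choice(empty, size=n_rewire, replace=False)
    flat_mask[regrow] = True
    flat_w[regrow] = rng.normal(0.0, 0.01, size=n_rewire)  # small random re-initialization

    return flat_w.reshape(weights.shape), flat_mask.reshape(mask.shape)
```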
Publisher
Springer Science and Business Media LLC
Subject
Geometry and Topology, Theoretical Computer Science, Software
Cited by
7 articles.