Author:
Brosnan Yuen, Minh Tu Hoang, Xiaodai Dong, Tao Lu
Abstract
This article proposes a universal activation function (UAF) that achieves near-optimal performance in quantification, classification, and reinforcement learning (RL) problems. For any given problem, gradient descent algorithms are able to evolve the UAF into a suitable activation function by tuning the UAF's parameters. For CIFAR-10 classification using the VGG-8 neural network, the UAF converges to a Mish-like activation function, which has near-optimal performance of $F_1 = 0.902 \pm 0.004$ when compared to other activation functions. In the graph convolutional neural network on the CORA dataset, the UAF evolves to the identity function and obtains $F_1 = 0.835 \pm 0.008$. For the quantification of simulated 9-gas mixtures in 30 dB signal-to-noise ratio (SNR) environments, the UAF converges to the identity function, which has a near-optimal root mean square error of $0.489 \pm 0.003~\mu\mathrm{M}$. In the ZINC molecular solubility quantification using graph neural networks, the UAF morphs into a LeakyReLU/Sigmoid hybrid and achieves $\mathrm{RMSE} = 0.47 \pm 0.04$. For the BipedalWalker-v2 RL task, the UAF reaches the 250 reward in $961 \pm 193$ epochs with a brand-new activation function, giving the fastest convergence rate among the activation functions tested.
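The abstract's central idea, a single parameterized activation whose shape is tuned by gradient descent alongside the network weights, can be sketched as follows. The functional form below (a difference of two softplus terms with tunable parameters `A`, `B`, `D`, `E`) is an illustrative assumption, not the authors' exact UAF; it is chosen because it can collapse to the identity function, one of the shapes the abstract reports the UAF converging to.

```python
import math

def softplus(x):
    # Numerically stable ln(1 + e^x) = max(x, 0) + ln(1 + e^(-|x|)).
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def uaf(x, A=1.0, B=0.0, D=-1.0, E=0.0):
    # Hypothetical parametric activation in the spirit of the paper's UAF;
    # the parameters A, B, D, E would be trained by gradient descent
    # together with the network weights.
    #   f(x) = softplus(A*(x + B)) - softplus(D*(x - B)) + E
    return softplus(A * (x + B)) - softplus(D * (x - B)) + E
```

With the default parameters (A=1, B=0, D=-1, E=0) the function reduces exactly to the identity, since ln(1+e^x) - ln(1+e^(-x)) = x; other parameter settings bend it toward ReLU-like or sigmoid-like shapes, which is the kind of flexibility the abstract describes.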
Funder
Natural Sciences and Engineering Research Council of Canada
Defense Threat Reduction Agency
Publisher
Springer Science and Business Media LLC
Cited by 37 articles.