Author:
Philipp Petersen, Mones Raslan, Felix Voigtlaender
Abstract
We analyze the topological properties of the set of functions that can be implemented by neural networks of a fixed size. Surprisingly, this set has many undesirable properties. It is highly non-convex, except possibly for a few exotic activation functions. Moreover, the set is not closed with respect to the $L^p$-norms, $0 < p < \infty$, for all practically used activation functions, and also not closed with respect to the $L^\infty$-norm for all practically used activation functions except for the ReLU and the parametric ReLU. Finally, the function that maps a family of weights to the function computed by the associated network is not inverse stable for every practically used activation function. In other words, if $f_1, f_2$ are two functions realized by neural networks and if $f_1, f_2$ are close in the sense that $\Vert f_1 - f_2 \Vert_{L^\infty} \le \varepsilon$ for $\varepsilon > 0$, it is, regardless of the size of $\varepsilon$, usually not possible to find weights $w_1, w_2$ close together such that each $f_i$ is realized by a neural network with weights $w_i$. Overall, our findings identify potential causes for issues in the training procedure of deep learning, such as no guaranteed convergence, explosion of parameters, and slow convergence.
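As an illustrative sketch (not taken from the paper itself): the non-closedness in $L^p$ can already be seen for a single sigmoid neuron, since $\sigma(nx)$ converges in $L^2([-1,1])$ to a step function that no single sigmoid neuron realizes. The grid size and the weight values below are arbitrary choices for the demonstration.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

# Quadrature grid on [-1, 1] and the discontinuous limit function.
x = np.linspace(-1.0, 1.0, 20001)
step = (x > 0).astype(float)  # Heaviside: in the closure, but not a sigmoid network

dists = []
for n in [1, 10, 100, 1000]:
    f_n = sigmoid(n * x)  # one-neuron network with weight n, bias 0
    # Approximate L^2 norm: mean of the squared error times the interval length 2.
    l2 = np.sqrt(np.mean((f_n - step) ** 2) * 2.0)
    dists.append(l2)

# The L^2 distance to the discontinuous limit shrinks as the weight grows,
# so the set of one-neuron sigmoid networks is not closed in L^2([-1, 1]).
```

Note that reaching the limit requires the weight $n$ to blow up, which mirrors the "explosion of parameters" issue mentioned in the abstract.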
Publisher
Springer Science and Business Media LLC
Subject
Applied Mathematics, Computational Theory and Mathematics, Computational Mathematics, Analysis
References: 73 articles.
Cited by
28 articles.