Authors: Lucas Böttcher, Gregory Wheeler
Abstract
Analyzing the geometric properties of high-dimensional loss functions, such as local curvature and the existence of other optima around a certain point in loss space, can help provide a better understanding of the interplay between neural-network structure, implementation attributes, and learning performance. In this paper, we combine concepts from high-dimensional probability and differential geometry to study how curvature properties in lower-dimensional loss representations depend on those in the original loss space. We show that saddle points in the original space are rarely correctly identified as such in the expected lower-dimensional representations if random projections are used. The principal curvature in the expected lower-dimensional representation is proportional to the mean curvature in the original loss space. Hence, the mean curvature in the original loss space determines whether saddle points appear, on average, as minima, maxima, or almost flat regions. We use the connection between the expected curvature in random projections and the mean curvature in the original space (i.e., the normalized Hessian trace) to compute Hutchinson-type trace estimates without calculating Hessian-vector products, as the original Hutchinson method does. Because random projections are not suitable for correctly identifying saddle information, we propose studying projections along the dominant Hessian directions that are associated with the largest and smallest principal curvatures. We connect our findings to the ongoing debate on loss-landscape flatness and generalizability. Finally, for different common image classifiers and a function approximator, we show and compare random and Hessian projections of loss landscapes with up to approximately 7 × 10^6 parameters.
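
As a rough illustration of the Hutchinson-type estimate described in the abstract, the sketch below approximates the Hessian trace from second directional derivatives of the loss along random unit directions, using finite differences instead of Hessian-vector products. It is a minimal sketch under stated assumptions, not the paper's implementation: the quadratic test loss, the helper name estimate_hessian_trace, the step size h, and the sample count are illustrative choices.

```python
import numpy as np

def estimate_hessian_trace(loss, theta, num_samples=100, h=1e-3, rng=None):
    """Hutchinson-type trace estimate without Hessian-vector products.

    For directions v drawn uniformly from the unit sphere, E[v^T H v] = tr(H)/n,
    so averaging finite-difference second directional derivatives of the loss
    and multiplying by the dimension n estimates the Hessian trace.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = theta.size
    base = loss(theta)
    acc = 0.0
    for _ in range(num_samples):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)  # uniform direction on the unit sphere
        # central finite difference approximating v^T H v
        acc += (loss(theta + h * v) - 2.0 * base + loss(theta - h * v)) / h**2
    return n * acc / num_samples  # n * E[v^T H v] ~ tr(H)

# Usage example (hypothetical): quadratic loss with known Hessian trace 10.
if __name__ == "__main__":
    A = np.diag([1.0, 2.0, 3.0, 4.0])
    loss = lambda x: 0.5 * x @ A @ x
    theta = np.ones(4)
    print(estimate_hessian_trace(loss, theta, num_samples=2000))
```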
Cited by: 2 articles.