Abstract
We study the natural gradient method for learning in deep Bayesian networks, including neural networks. There are two natural geometries associated with such learning systems, which consist of visible and hidden units. One geometry is related to the full system, the other to the visible sub-system. These two geometries imply different natural gradients. In a first step, we demonstrate a great simplification of the natural gradient with respect to the first geometry, due to locality properties of the Fisher information matrix. This simplification does not directly translate to a corresponding simplification with respect to the second geometry. We develop the theory for studying the relation between the two versions of the natural gradient and outline a method for simplifying the natural gradient with respect to the second geometry based on the first one. This method suggests incorporating a recognition model as an auxiliary model for the efficient application of the natural gradient method in deep networks.
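For orientation, the natural gradient update in its standard form (stated here as background, following Amari's formulation rather than a formula taken from this abstract) preconditions the ordinary gradient of a loss $L$ with the inverse Fisher information matrix $F$ of the model distribution $p_\theta$:

$$
\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_\theta L(\theta_t),
\qquad
F(\theta) = \mathbb{E}_{x \sim p_\theta}\!\left[ \nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^{\top} \right].
$$

Roughly speaking, the two geometries discussed in the abstract correspond to computing $F$ with respect to the joint distribution over visible and hidden units, or with respect to the marginal distribution over the visible units only.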
Funder
Max Planck Institute for Mathematics in the Sciences
Publisher
Springer Science and Business Media LLC
Cited by
4 articles.