Authors: Charlotte Frenkel, Martin Lefebvre, David Bol
Abstract
While the backpropagation of error algorithm enables deep neural network training, it implies (i) bidirectional synaptic weight transport and (ii) update locking until the forward and backward passes are completed. Not only do these constraints preclude biological plausibility, but they also hinder the development of low-cost adaptive smart sensors at the edge, as they severely constrain memory accesses and entail buffering overhead. In this work, we show that the one-hot-encoded labels provided in supervised classification problems, denoted as targets, can be viewed as a proxy for the error sign. Therefore, their fixed random projections enable a layerwise feedforward training of the hidden layers, thus solving the weight transport and update locking problems while relaxing the computational and memory requirements. Based on these observations, we propose the direct random target projection (DRTP) algorithm and demonstrate that it provides a tradeoff between accuracy and computational cost that is suitable for adaptive edge computing devices.
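The core idea can be sketched in a few lines: each hidden layer's weight update is modulated by a fixed random projection of the one-hot target rather than by a backpropagated error, so the update can be applied during the forward pass itself; only the output layer is trained with the true error. The sketch below is an illustrative toy (network sizes, learning rate, and the synthetic prototype dataset are assumptions for demonstration, not from the paper).

```python
import numpy as np

def drtp_demo(steps=2000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_h1, n_h2, n_out = 8, 16, 16, 4

    # Toy data: each class is a noisy copy of a fixed prototype vector.
    protos = rng.normal(size=(n_out, n_in))
    def sample():
        c = rng.integers(n_out)
        return protos[c] + 0.1 * rng.normal(size=n_in), c

    # Trainable feedforward weights.
    W1 = rng.normal(0, 0.3, (n_h1, n_in))
    W2 = rng.normal(0, 0.3, (n_h2, n_h1))
    W3 = rng.normal(0, 0.3, (n_out, n_h2))
    # Fixed random target-projection matrices (never trained): they map the
    # one-hot label directly to each hidden layer's modulatory signal.
    B1 = rng.normal(0, 0.3, (n_h1, n_out))
    B2 = rng.normal(0, 0.3, (n_h2, n_out))

    def softmax(a):
        e = np.exp(a - a.max())
        return e / e.sum()

    for _ in range(steps):
        x, c = sample()
        t = np.eye(n_out)[c]  # one-hot target
        # Forward pass with immediate, local updates: each hidden layer's
        # update depends only on B_k @ t and its own activations, so no
        # backward pass is needed (no weight transport, no update locking).
        a1 = W1 @ x
        h1 = np.tanh(a1)
        W1 += lr * np.outer((B1 @ t) * (1.0 - h1**2), x)
        a2 = W2 @ h1
        h2 = np.tanh(a2)
        W2 += lr * np.outer((B2 @ t) * (1.0 - h2**2), h1)
        # Output layer: standard delta rule with the true error (t - y).
        y = softmax(W3 @ h2)
        W3 += lr * np.outer(t - y, h2)

    # Evaluate on fresh samples.
    correct = sum(
        int(np.argmax(W3 @ np.tanh(W2 @ np.tanh(W1 @ x))) == c)
        for x, c in (sample() for _ in range(200))
    )
    return correct / 200

if __name__ == "__main__":
    print(drtp_demo())
```

The point of the sketch is structural, not the accuracy number: the hidden-layer updates are computed inside the forward loop, before any output or error is available, which is what removes the buffering overhead the abstract refers to.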
Funder
Fonds De La Recherche Scientifique - FNRS
Cited by: 36 articles.