Affiliation:
1. Centre for Signal Processing, School of Electrical & Electronic Eng., Nanyang Technological University, Singapore
Abstract
Training set parallelism and network-based parallelism are two popular paradigms for parallelizing a feedforward (artificial) neural network. Training set parallelism is particularly suited to feedforward neural networks with backpropagation learning where the size of the training set is large in relation to the size of the network. This paper analyzes training set parallelism for feedforward neural networks implemented on a transputer array configured in a pipelined ring topology. Theoretical expressions for the time per epoch (iteration) and the optimal size of the processor network are derived when the training set is equally distributed among the processing nodes. These show that the speedup is a function of the number of patterns per processor, the communication overhead per epoch, and the total number of processors in the topology. The analysis is then extended to the optimal distribution of the training set over a given processor network when the number of patterns in the training set is not an integer multiple of the number of processors. It is shown that optimal allocation of patterns in such cases is a mixed integer programming problem. Using this analysis, it is found that equal distribution of training patterns among the processors is not the optimal allocation even when the number of patterns is an integer multiple of the number of processors. The analysis is also extended to processor networks comprising processors of different speeds. Experimental results from a T805 transputer array are presented to verify all the theoretical results.
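For orientation, a minimal illustrative cost model of training set parallelism on a P-processor ring is sketched below. It is not the expressions derived in the paper; the symbols N (number of training patterns), t_p (compute time per pattern), and t_c (communication overhead contributed per processor per epoch) are assumptions introduced here for exposition, and a single processor is assumed to incur no communication cost.

% Illustrative sketch only, not the paper's derived expressions.
\[
  T_{\mathrm{epoch}}(P) \;\approx\; \frac{N}{P}\,t_p \;+\; P\,t_c,
  \qquad
  S(P) \;=\; \frac{T_{\mathrm{epoch}}(1)}{T_{\mathrm{epoch}}(P)}
        \;\approx\; \frac{N\,t_p}{\frac{N}{P}\,t_p + P\,t_c}.
\]
% Under this model, setting dT_epoch/dP = 0 gives the ring size that minimizes epoch time:
\[
  P^{*} \;=\; \sqrt{\frac{N\,t_p}{t_c}}.
\]

Under such a model the speedup saturates and then degrades as P grows, because the per-epoch communication term grows with the ring size while the per-processor compute share shrinks, which is consistent with the abstract's statement that speedup depends on patterns per processor, communication overhead per epoch, and the number of processors.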
Publisher
World Scientific Publishing Co Pte Ltd
Subject
Computer Networks and Communications, General Medicine