Abstract
This study compares two of the most popular neural network frameworks, Theano and TensorFlow, on their performance on a given problem: the recognition of handwritten digits from zero to nine, using the MNIST database. This database is a good benchmark for such a comparison because it is the subject of active ongoing research and has produced excellent, well-documented results. However, as will be discussed in more detail later, neural networks require a sizeable amount of sample data in order to be trained and to deliver accurate results, which is why problems of this kind are frequently encountered by Big Data practitioners. Consequently, as the project description implies, we do not present only a standard comparison; we also evaluate the performance of these networks in a Big Data environment using distributed computing. To extend the scope of the comparison beyond MNIST, the Fashion MNIST (FMNIST) database and CIFAR10 are also tested, using the same neural network design. The same code, with the same structure, can be reused across both frameworks thanks to the higher-level library Keras, which runs on top of either backend (in our case, Theano or TensorFlow). The high computational cost of training CNNs on large data sets has driven a surge in research and development on open-source parallel GPU implementations; however, few studies have assessed the performance characteristics of those implementations. In this study, we compare these implementations carefully across a wide range of parameter configurations, investigate potential performance bottlenecks, and identify a number of areas that could benefit from further fine-tuning.
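The following is a minimal sketch of the backend-agnostic setup the abstract describes, not the network evaluated in this study: the layer sizes, optimizer, and epoch count are illustrative assumptions. With multi-backend Keras, the KERAS_BACKEND environment variable selects Theano or TensorFlow before Keras is imported, and the dataset module (mnist, fashion_mnist, or cifar10) can be swapped while the model-building code stays the same.

```python
import os
# Must be set before importing keras; "theano" selects the other backend.
os.environ.setdefault("KERAS_BACKEND", "tensorflow")

from keras.datasets import mnist  # fashion_mnist / cifar10 swap in the same way
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.utils import to_categorical

# Load and normalize the data; MNIST/FMNIST images are 28x28 grayscale.
# (CIFAR10 images are 32x32x3, so input_shape changes accordingly.)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

# An illustrative small CNN; the same definition runs on either backend.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128,
          validation_data=(x_test, y_test))
```

Because Keras delegates all tensor operations to the selected backend, timing the same script under each setting of KERAS_BACKEND is one straightforward way to compare the frameworks on identical model code.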