Author:
Khan Vijitha, Parameshwaran R, Arulkumaran G, Gopi B
Abstract
Machine learning models are computationally demanding, requiring substantial hardware and power during the computation phase. Stochastic Computing (SC) has been introduced as a compromise between this computational demand and the capacity of systems and organisations to deploy such models. SC greatly reduces hardware requirements and energy cost while only marginally compromising inference precision and computation speed. Moreover, recent advances in SC technology have substantially improved the efficiency of SC neural networks, making them comparable to standard binary implementations while using less hardware. This paper begins with the design of a basic SC neuron and then surveys different kinds of SC machine learning, including word-embedding networks, convolutional networks, and reinforcement learning.
Subsequently, recent developments in SC architectures that further improve the speed and reliability of machine-learning hardware are discussed. The generality and simplicity of SC machine learning are demonstrated for both training and inference. Finally, the strengths and drawbacks of SC machine learning relative to conventional alternatives are addressed.
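The trade-off the abstract describes rests on the core SC encoding: a value p in [0, 1] is represented as a random bitstream in which each bit is 1 with probability p, so multiplication reduces to a single AND gate per bit pair at the cost of estimation noise. The following is a minimal illustrative sketch of that idea (standard unipolar SC, not code from the paper itself); the function names and stream length are assumptions for the example.

```python
import random

def to_stream(p, n, rng):
    # Encode p in [0, 1] as a unipolar bitstream of length n: P(bit = 1) = p
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_stream(bits):
    # Decode a bitstream back to a value: the fraction of 1s
    return sum(bits) / len(bits)

def sc_multiply(sa, sb):
    # Unipolar SC multiplication: one AND gate per bit pair,
    # valid because the two streams are statistically independent
    return [a & b for a, b in zip(sa, sb)]

rng = random.Random(0)
n = 10_000
sa = to_stream(0.6, n, rng)
sb = to_stream(0.5, n, rng)
prod = from_stream(sc_multiply(sa, sb))
print(prod)  # close to 0.6 * 0.5 = 0.30
```

Longer streams shrink the estimation error but slow the computation down, which is exactly the precision-versus-speed compromise the abstract refers to.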
Subject
General Physics and Astronomy
Cited by
1 article.