Affiliation:
1. Bu-Ali Sina University
2. Independent Researcher
3. University of Oslo
Abstract
Large Convolutional Neural Networks (CNNs) can extract suitable features from data, but they increase memory and energy consumption and demand significant computing resources, especially in IoT infrastructures. CNNs can be distributed across end devices, the edge, and the cloud, but such distribution may increase privacy risks and latency. This paper proposes using only the edge (fog) and end devices to mitigate these risks. The approach divides a large neural network (NN) into several smaller NNs and distributes them across the end devices. The proposed method increases the security of the learning system by ensuring that all NNs on the distributed end devices, and all entities involved in the learning process, participate in joint learning and undergo continuous validation. However, accuracy decreases when an end device fails. To avoid a significant drop in accuracy, we introduce a modifier module at the edge that improves results in the event of end-device failure; this module is built from the NNs on the end devices. Experimental results show that, when one of the end devices fails, the modifier module improves accuracy by approximately 1.5%. This enables CNNs to run efficiently on edge devices and improves service delivery in areas such as healthcare and self-driving vehicles.
Publisher
Research Square Platform LLC