Author
Park EunChan, Baek KyeongDeok, Cho Eunho, Ko In-Young
Abstract
With the increasing number of Web of Things devices, the network and processing delays in the cloud have also increased. As a solution, fog computing has emerged, placing computational resources closer to the user to lower the communication overhead and congestion in the cloud. In fog computing systems, microservices are deployed as containers, which require an orchestration tool like Kubernetes to support service discovery, placement, and recovery. A key challenge in the orchestration of microservices is automatically scaling the microservices in case of an unpredictable burst of load. In cloud computing, a centralized autoscaler can monitor the deployed microservice instances and take scaling actions based on the monitored metric values. However, monitoring an increasing number of microservices in fog computing can cause excessive network overhead and thereby delay scaling actions. We propose DESA, a fully DEcentralized Self-adaptive Autoscaler through which microservice instances make their own scaling decisions, cloning or terminating themselves through self-monitoring. We evaluate DESA in a simulated fog computing environment with different numbers of fog nodes. Furthermore, we conduct a case study with the 1998 World Cup website access log, examining DESA's performance in a realistic scenario. The results show that DESA successfully reduces the scaling reaction time in large-scale fog computing systems compared to the centralized approach. Moreover, DESA results in a similar maximum number of instances and lower average CPU utilization during bursts of load.
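To illustrate the idea of decentralized self-monitoring described in the abstract, the sketch below shows one possible per-instance scaling loop: each replica watches only its own CPU utilization and decides to clone or terminate itself, with no central autoscaler polling all instances. The thresholds, monitoring interval, and helper functions (read_own_cpu, clone_self, terminate_self) are illustrative assumptions for this sketch, not the paper's actual DESA algorithm.

```python
# Minimal sketch of a decentralized, self-monitoring autoscaling loop.
# All names and threshold values here are assumptions for illustration only.
import random
import time

SCALE_OUT_THRESHOLD = 0.8   # assumed CPU fraction above which an instance clones itself
SCALE_IN_THRESHOLD = 0.2    # assumed CPU fraction below which an instance terminates itself
MONITOR_INTERVAL_S = 5      # assumed self-monitoring period in seconds


def read_own_cpu() -> float:
    """Placeholder for reading this instance's own CPU utilization (0.0-1.0)."""
    return random.random()


def clone_self() -> None:
    """Placeholder for asking the orchestrator to start one more replica of this service."""
    print("scale out: requesting one additional replica")


def terminate_self() -> None:
    """Placeholder for gracefully shutting this replica down."""
    print("scale in: terminating this replica")


def self_scaling_loop() -> None:
    # Each replica runs this loop independently, so monitoring traffic stays local
    # instead of flowing to a centralized autoscaler.
    while True:
        cpu = read_own_cpu()
        if cpu > SCALE_OUT_THRESHOLD:
            clone_self()
        elif cpu < SCALE_IN_THRESHOLD:
            terminate_self()
            break
        time.sleep(MONITOR_INTERVAL_S)


if __name__ == "__main__":
    self_scaling_loop()
```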
Subject
Computer Networks and Communications, Information Systems, Software
Cited by
1 article.
1. Cost-Aware Computation Offloading for Managing Cloud-Bursts in IoT-Based Cloud-Fog Networks; 2024 1st International Conference on Cognitive, Green and Ubiquitous Computing (IC-CGU); 2024-03-01