On the Analysis of Inter-Relationship between Auto-Scaling Policy and QoS of FaaS Workloads
Authors:
Hong Sara 1, Kim Yeeun 1, Nam Jaehyun 2, Kim Seongmin 1
Affiliations:
1. Department of Convergence Security Engineering, Sungshin Women’s University, 2, Bomun-ro 34da-gil, Seongbuk-gu, Seoul 02844, Republic of Korea
2. Department of Computer Engineering, Dankook University, 152, Jukjeon-ro, Suji-gu, Yongin-si 16890, Republic of Korea
Abstract
Recent developments in cloud computing have introduced serverless technology, which enables convenient and flexible management of cloud-native applications. Function-as-a-Service (FaaS) offerings typically rely on serverless backends such as Kubernetes (K8s) and Knative to manage their underlying containerized contexts, including auto-scaling and pod scheduling. To obtain these advantages, service providers increasingly deploy self-hosted serverless services on their own on-premises FaaS platforms rather than relying on commercial public cloud offerings. However, K8s lacks standardized guidelines for choosing auto-scaling configuration options that schedule and allocate resources fairly in such on-premises serverless environments, making it difficult to meet the service-level objectives (SLOs) of diverse workloads. This study fills this gap by exploring the relationship between auto-scaling behavior and the performance of FaaS workloads under different scaling-related configurations in K8s. Based on comprehensive measurement studies, we derive guidance on which scaling configurations, such as the base metric and its threshold, should be applied to a given workload to improve latency SLO attainment and the number of successful responses. Additionally, we propose a methodology to assess how efficiently the related K8s configurations scale with respect to the quality of service (QoS) of FaaS workloads.
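The scaling configurations discussed above (a base metric plus a target threshold) drive replica counts through the proportional rule used by the Kubernetes Horizontal Pod Autoscaler. A minimal sketch of that documented rule follows; the function name and example values are illustrative, not taken from the paper:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA proportional rule: scale the replica count by the
    ratio of the observed metric value to its configured target.
    desired = ceil(current * observed / target), floored at one replica."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# Illustrative: 4 pods averaging 90% CPU against a 60% target -> 6 pods
print(desired_replicas(4, 90.0, 60.0))  # -> 6
```

A lower target threshold therefore triggers scale-out earlier (more replicas for the same observed load), which is one axis along which the latency SLO and resource cost trade off.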
Funder
National Research Foundation of Korea MSIT under the ICAN (ICT Challenge and Advanced Network of HRD) program