Flexible Deployment of Machine Learning Inference Pipelines in the Cloud–Edge–IoT Continuum
Published: 2024-05-11
Volume: 13, Issue: 10, Page: 1888
ISSN: 2079-9292
Container-title: Electronics
Short-container-title: Electronics
Language: en
Author:
Bogacka, Karolina (1,2); Sowiński, Piotr (1,2); Danilenka, Anastasiya (1,2); Biot, Francisco Mahedero (3); Wasielewska-Michniewska, Katarzyna (1); Ganzha, Maria (1,2); Paprzycki, Marcin (1); Palau, Carlos E. (3)
Affiliation:
1. Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warsaw, Poland
2. Faculty of Mathematics and Information Science, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland
3. Communications Department, Universitat Politècnica de València, Camí de Vera, s/n, 46022 Valencia, Spain
Abstract
Currently, deploying machine learning workloads in the Cloud–Edge–IoT continuum is challenging due to the wide variety of available hardware platforms, stringent performance requirements, and the heterogeneity of the workloads themselves. To alleviate this, a novel, flexible approach for machine learning inference is introduced, which is suitable for deployment in diverse environments—including edge devices. The proposed solution has a modular design and is compatible with a wide range of user-defined machine learning pipelines. To improve energy efficiency and scalability, a high-performance communication protocol for inference is proposed, along with a scale-out mechanism based on a load balancer. The inference service plugs into the ASSIST-IoT reference architecture, thus taking advantage of its other components. The solution was evaluated in two scenarios closely emulating real-life use cases, with demanding workloads and requirements spanning several different deployment configurations. The results from the evaluation show that the proposed software meets the high-throughput, low-latency inference requirements of the use cases while effectively adapting to the available hardware. The code and documentation, in addition to the data used in the evaluation, were open-sourced to foster adoption of the solution.
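The scale-out mechanism mentioned in the abstract can be illustrated with a minimal round-robin sketch. This is a hypothetical toy, not the paper's actual load balancer or communication protocol: the class and worker names (`RoundRobinBalancer`, `edge-0`, `edge-1`) are invented for illustration, and the "inference" step is a stand-in function.

```python
import itertools

class RoundRobinBalancer:
    """Dispatches inference requests across replicated pipeline workers,
    cycling through the replicas in round-robin order (illustrative sketch)."""

    def __init__(self, workers):
        self._cycle = itertools.cycle(workers)

    def dispatch(self, payload):
        # Pick the next replica and forward the request to it.
        worker = next(self._cycle)
        return worker(payload)

def make_worker(name):
    # Stand-in "inference" worker that tags which replica handled the request.
    return lambda payload: (name, payload * 2)

balancer = RoundRobinBalancer([make_worker("edge-0"), make_worker("edge-1")])
results = [balancer.dispatch(x) for x in range(4)]
# Requests alternate between the two replicas: edge-0, edge-1, edge-0, edge-1.
```

In a real deployment of this kind, the balancer would sit in front of network endpoints (e.g., pods in a Kubernetes cluster) rather than in-process callables, and replicas would be added or removed as load changes; the round-robin core stays the same.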
Funder
European Commission Horizon Europe project aerOS
References: 68 articles.