Quantifying Cloud Performance and Dependability

Author:

Nikolas Herbst, André Bauer, Samuel Kounev, Giorgos Oikonomou, Erwin van Eyk, George Kousiouris, Athanasia Evangelinou, Rouven Krebs, Tim Brecht, Cristina L. Abad, Alexandru Iosup

Abstract

In only a decade, cloud computing has emerged from a pursuit for a service-driven information and communication technology (ICT), becoming a significant fraction of the ICT market. Responding to the growth of the market, many alternative cloud services and their underlying systems are currently vying for the attention of cloud users and providers. To make informed choices between competing cloud service providers, permit the cost-benefit analysis of cloud-based systems, and enable system DevOps to evaluate and tune the performance of these complex ecosystems, appropriate performance metrics, benchmarks, tools, and methodologies are necessary. This requires re-examining old system properties and considering new system properties, possibly leading to the re-design of classic benchmarking metrics such as expressing performance as throughput and latency (response time). In this work, we address these requirements by focusing on four system properties: (i) elasticity of the cloud service, to accommodate large variations in the amount of service requested, (ii) performance isolation between the tenants of shared cloud systems and resulting performance variability, (iii) availability of cloud services and systems, and (iv) the operational risk of running a production system in a cloud environment. Focusing on key metrics for each of these properties, we review the state-of-the-art, then select or propose new metrics together with measurement approaches. We see the presented metrics as a foundation toward upcoming, future industry-standard cloud benchmarks.
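As an illustration of the classic benchmarking metrics the abstract says may need re-design, the sketch below computes throughput and latency (response time) statistics from a request log. This is not taken from the paper; the request timestamps and the nearest-rank percentile choice are illustrative assumptions.

```python
# Illustrative sketch (not the paper's methodology): classic throughput
# and latency (response time) metrics from a hypothetical request log.
# Each entry is a (start, end) timestamp pair in seconds.
requests = [(0.0, 0.2), (0.1, 0.4), (0.5, 0.6), (0.9, 1.4)]

latencies = sorted(end - start for start, end in requests)
makespan = max(end for _, end in requests) - min(start for start, _ in requests)

throughput = len(requests) / makespan           # completed requests per second
mean_latency = sum(latencies) / len(latencies)  # average response time
# Simple nearest-rank 95th-percentile response time (tail latency).
p95_latency = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
```

Averages alone hide the variability the paper is concerned with, which is why tail percentiles such as the 95th are commonly reported alongside mean latency.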

Funder

Deutsche Forschungsgemeinschaft

Horizon 2020 Framework Programme

Nederlandse Organisatie voor Wetenschappelijk Onderzoek

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications; Hardware and Architecture; Safety, Risk, Reliability and Quality; Media Technology; Information Systems; Software; Computer Science (miscellaneous)


Cited by 22 articles.
