Affiliation:
1. Instituto de Computação, Universidade Federal Fluminense (UFF), Niterói, Brazil
2. Department of Computer Science, Universidade de Brasília (UnB), Brasília, Brazil
3. MINES ParisTech‐PSL/CRI, Paris, France
Abstract
Cloud computing is currently one of the prime choices in the computing infrastructure landscape. In addition to advantages such as the pay‐per‐use billing model and resource elasticity, it offers technical benefits regarding heterogeneity and large‐scale configuration. Alongside the classical performance concerns, such as time, space, and energy, there is interest in the monetary cost that may arise from budget constraints. Given scalability considerations and the pricing model of traditional public clouds, a reasonable output of an optimization strategy is the most suitable configuration of virtual machines to run a specific workload. Targeting both runtime and monetary cost optimization, we adapt an execution cost model for Hadoop applications from the literature to Spark applications modeled with the MapReduce paradigm. We evaluate our optimizer model by executing an improved version of the Diff Sequences Spark application, which performs pairwise comparisons of SARS‐CoV‐2 coronavirus sequences on AWS EC2 virtual machine instances. The experimental results with our model outperformed 80% of the random resource selection scenarios. By employing only spot worker nodes, which are exposed to revocation, rather than on‐demand workers, we obtained an average monetary cost reduction of 35.66% with a slight runtime increase of 3.36%.
Funder
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro
Subject
Computational Theory and Mathematics, Computer Networks and Communications, Computer Science Applications, Theoretical Computer Science, Software