Abstract
Software performance concerns have been attracting research interest at an increasing rate, especially regarding energy performance on battery-powered computing devices. In the context of mobile devices, several research works have been devoted to assessing the performance of software and its underlying code. One important contribution of such research efforts is sets of programming guidelines that identify efficient and inefficient programming practices and, consequently, steer software developers toward writing performance-friendly code. Despite recent efforts in this direction, it is still nearly infeasible to obtain universal and up-to-date knowledge regarding the performance of software and its source code. This is especially true for energy performance, where interest in optimizing software energy consumption has grown due to the power restrictions of such devices. The community still reports many difficulties in measuring performance, namely in large-scale validation and replication. The Android ecosystem is a particular example: the considerable fragmentation of the platform, the constant evolution of the hardware, the software platform, and the development libraries themselves, and the fact that most platform tools are integrated into the IDE’s GUI make it extremely difficult to conduct performance studies based on large sets of data/applications. In this paper, we analyze the execution of a diverse corpus of applications of significant magnitude. We analyze the source-code performance of 1322 versions of 215 different Android applications, dynamically executed in more than 27,900 tested scenarios, using state-of-the-art black-box testing frameworks with different combinations of GUI inputs. Our empirical analysis shows that semantic program changes, such as adding functionality and fixing bugs, are the changes most associated with a relevant impact on energy performance. Furthermore, we also demonstrate that several coding practices previously identified as energy-greedy do not replicate such behavior in our execution context and can have distinct impacts across several performance indicators: runtime, memory, and energy consumption. Some of these practices include performance issues reported by Android Lint and the Android SDK APIs. We also provide evidence that the evaluated performance indicators have little to no correlation with the priority of the performance issues detected by Android Lint. Finally, our results demonstrate that there are significant differences in performance between the most widely used libraries for implementing common programming tasks, such as HTTP communication, JSON manipulation, and image loading/rendering, and we provide a set of recommendations for selecting the most efficient library for each performance indicator. Based on the conclusions drawn and extending the developed work, we also synthesize a set of guidelines that practitioners can use to replicate energy studies and build more efficient mobile software.
Funder
Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa
Publisher
Springer Science and Business Media LLC