Abstract
Performance tests are crucial for measuring the quality of software developments, since they identify aspects that must be improved in order to achieve customer satisfaction. The objective of this work was to identify the optimal Machine Learning technique for predicting whether or not a software development meets the customer's acceptance criteria. A dataset of information obtained from performance tests on web services was used, together with the F1-score quality metric. This paper concludes that, although the Random Forest technique obtained the best score, it cannot be claimed to be the best Machine Learning technique overall; the quantity and quality of the data used in training, as well as adequate processing of the information, play a decisive role.
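The F1-score used to compare the techniques is the harmonic mean of precision and recall. As a minimal illustration (not the paper's actual evaluation pipeline), the following sketch computes it for a binary labeling where 1 denotes "meets the acceptance criteria"; the example labels are hypothetical.

```python
def f1_score(y_true, y_pred):
    """Compute the F1-score for binary labels (1 = meets acceptance criteria)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # Harmonic mean of precision and recall (0.0 if both are zero)
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Hypothetical labels: actual vs. predicted acceptance of five test runs
print(f1_score([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # → 0.666... (precision = recall = 2/3)
```

Because F1 balances precision against recall, it penalizes a classifier that merely predicts the majority class, which is why it is a common choice when acceptance/rejection classes are imbalanced.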
Publisher
Politecnico Colombiano Jaime Isaza Cadavid
Cited by
1 article.