Authors:
Cahya Indirman M. Didik, Wahyu Wiriasto Giri, Irfan Akbar L. Ahmad S.
Abstract
Hadoop and Apache Spark have become popular frameworks for distributed big data processing. This research configures Hadoop and Spark to train and test distributed machine learning models on big data with MLlib, using linear regression and multiple linear regression. Additionally, an LSTM model from an external library is used for experimentation. The experiments use three desktop computers to run a series of single-node and multi-node tests. Three datasets, namely bitcoin (3,613,767 rows), gold-price (5,585 rows), and housing-price (23,613 rows), are employed as case studies. The distributed computation tests allocate a uniform number of processor cores across the three devices and measure execution time, RMSE, and MAPE. In the single-node tests with MLlib (both linear and multiple linear regression), varying core utilization from 2 to 16 cores, all datasets perform optimally with 12 cores, at an execution time of 532.328 seconds. In contrast, varying the core allocation for the LSTM method yields no significant improvement and requires longer program execution times. In the two-node tests, optimal performance is achieved with 8 cores (924.711 seconds), while in the three-node tests the ideal configuration is 6 cores (881.495 seconds). In conclusion, distributed MLlib programs cannot be processed without HDFS, and the optimal core allocation depends on the number of nodes used and the size of the dataset.
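To illustrate the kind of distributed MLlib workflow the abstract describes, the following is a minimal PySpark sketch, not the authors' actual code: the Spark master URL, HDFS path, core count, and column names (open, high, low, close) are hypothetical placeholders, and the core allocation is set through spark.cores.max as one plausible way to vary cores per job.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.evaluation import RegressionEvaluator

# Connect to a standalone Spark cluster and cap the total cores for this job
# (assumed master URL and core count; adjust to the actual cluster setup).
spark = (SparkSession.builder
         .appName("mllib-linear-regression")
         .master("spark://master:7077")
         .config("spark.cores.max", "12")
         .getOrCreate())

# The dataset must reside on HDFS so all worker nodes can read it
# (hypothetical path and schema).
df = spark.read.csv("hdfs://master:9000/data/gold-price.csv",
                    header=True, inferSchema=True)

# Assemble predictor columns into a single feature vector for MLlib.
assembler = VectorAssembler(inputCols=["open", "high", "low"],
                            outputCol="features")
data = (assembler.transform(df)
        .withColumnRenamed("close", "label")
        .select("features", "label"))

train, test = data.randomSplit([0.8, 0.2], seed=42)

# Fit a (multiple) linear regression model and evaluate RMSE on the test split;
# MAPE would require a custom computation over the predictions.
model = LinearRegression(featuresCol="features", labelCol="label").fit(train)
predictions = model.transform(test)
rmse = RegressionEvaluator(metricName="rmse").evaluate(predictions)
print(f"RMSE: {rmse:.4f}")

spark.stop()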