Author:
Vibhatha Abeykoon, Geoffrey Charles Fox
Abstract
Over the past decade, data analytics has undergone a significant transformation driven by the increasing availability of data and the need to extract valuable insights from it. However, the classical big data stack is too slow for modern data engineering workloads, highlighting the need for high-performance computing. This demand has motivated the engineering community to build diverse frameworks, including Apache Arrow, Apache Parquet, Twister2, Cylon, Velox, and DataFusion. These frameworks provide high-performance data processing through C++-backed core APIs, with extended usability through support for Python and R. Our research focuses on trends in the evolution of data engineering, which has been characterized by a shift toward high-performance computing, with frameworks designed to keep pace with the evolving demands of the field. Our findings show that modern data analytics frameworks are built on C++ compute and communication kernels designed to facilitate high-performance data processing, and that this has been a critical motivation for developing scalable components for data engineering frameworks.