Affiliation:
1. Cyberspace Security and Computer College, Hebei University, Baoding 071000, China
Abstract
Apache Spark is a high-speed computing engine for processing massive data. With its widespread adoption, there is a growing need to analyze its correctness and temporal properties. However, little research has focused on verifying the temporal properties of Spark programs. To address this gap, we employ the code-level runtime verification tool UMC4M, which is based on the Modeling, Simulation, and Verification Language (MSVL). To this end, a Spark program S is translated into an MSVL program M, and the negation of the property P to be verified, specified as a Propositional Projection Temporal Logic (PPTL) formula, is also translated into an MSVL program M1; a new MSVL program “M and M1” is then compiled and executed. Whether S violates P is determined by whether “M and M1” has an acceptable execution. The key issue is therefore how to formally model Spark programs as MSVL programs. We previously proposed a solution to this problem: using MSVL functions to perform Resilient Distributed Dataset (RDD) operations and converting a Spark program into an MSVL program based on its Directed Acyclic Graph (DAG). However, that earlier work only outlined the idea. Building on it, we implement the conversion from RDD operations to MSVL functions, and we propose and implement rules for translating Spark programs into MSVL programs based on the DAG. We confirm the feasibility of this approach and provide a viable method for verifying the temporal properties of Spark programs. In addition, an automatic translation tool, S2M, is developed. Finally, a case study demonstrates the conversion process.
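To make the translation target concrete, the sketch below shows a minimal Spark program of the kind the abstract describes: a chain of RDD transformations whose DAG would serve as the input to a Spark-to-MSVL translation such as S2M. The word-count pipeline, object name, and input path are hypothetical illustrations, not the paper's actual case study.

```scala
// Hypothetical Spark RDD program (illustrative only, not the paper's case study).
// Each transformation (flatMap, map, reduceByKey) adds a node to the lazily built DAG;
// the action collect() triggers execution of that DAG.
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    val lines  = sc.textFile("input.txt")            // hypothetical input path
    val words  = lines.flatMap(_.split("\\s+"))      // RDD transformation
    val pairs  = words.map(w => (w, 1))               // RDD transformation
    val counts = pairs.reduceByKey(_ + _)             // RDD transformation

    counts.collect().foreach { case (w, c) => println(s"$w: $c") } // action
    sc.stop()
  }
}
```

In the approach described above, each of these RDD operations would be realized by a corresponding MSVL function, and the program's DAG would guide how the MSVL program is assembled for verification against a PPTL property.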
Funder
Hebei Natural Science Foundation
Science and Technology Research Project of Higher Education in Hebei Province
Advanced Talents Incubation Program of the Hebei University
Cited by
1 article.