Abstract
This research comprehensively examines the current state of knowledge in data flow testing (DFT) to identify knowledge gaps and inform future research. The authors undertook this study to advance the practice of DFT and to improve the quality and reliability of software systems. They analyze several state-of-the-art techniques, including the correlation tree concept, the particle swarm optimization algorithm, approaches that leverage data flow knowledge during test execution, and neural networks and genetic algorithms. The authors also discuss the methods used to evaluate the effectiveness and accuracy of these techniques, including case studies, simulations, and test data generation techniques. The authors aim to provide a high-level overview of the field’s current state and note that future work could focus on in-depth analysis of specific areas within DFT. Future researchers can use this work to gain a deeper understanding of current algorithms and to improve upon them. This is particularly important because DFT is an essential part of the testing process for any software.