Affiliations:
1. The University of Sheffield, United Kingdom
2. Ulm University, Germany
Abstract
From simulating galaxy formation to viral transmission in a pandemic, scientific models play a pivotal role in developing scientific theories and supporting government policy decisions that affect us all. Given these critical applications, a poor modelling assumption or bug could have far-reaching consequences. However, scientific models possess several properties that make them notoriously difficult to test, including a complex input space, long execution times, and non-determinism, rendering existing testing techniques impractical. In fields such as epidemiology, where researchers seek answers to challenging causal questions, a statistical methodology known as causal inference has addressed similar problems, enabling the inference of causal conclusions from noisy, biased, and sparse data instead of costly experiments. This article introduces the causal testing framework: a framework that uses causal inference techniques to establish causal effects from existing data, enabling users to conduct software testing activities concerning the effect of a change, such as metamorphic testing, a posteriori. We present three case studies covering real-world scientific models, demonstrating how the causal testing framework can infer metamorphic test outcomes from reused, confounded test data, providing an efficient solution for testing scientific modelling software.
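The abstract's central idea, inferring the effect of an input change from reused, confounded test data rather than fresh experiments, can be illustrated with a minimal sketch. The scenario, variable names, and data below are illustrative assumptions, not the paper's actual framework or API: a confounder C influences both a model input X and its output Y, so a naive regression of Y on X over historical test runs gives a biased effect estimate, while adjusting for C (a standard causal inference technique) recovers the true causal effect that a metamorphic relation would assert.

```python
# Hypothetical sketch: estimating a causal effect a posteriori from
# confounded observational test runs, using regression adjustment.
# All names and the simulated data are illustrative assumptions.
import random

random.seed(0)

# Simulate existing test runs of a scientific model. A confounder C
# drives both the input X and the output Y; the true causal effect
# of X on Y is 2 (Y = 2*X + 3*C + noise).
runs = []
for _ in range(1000):
    c = random.random()
    x = c + random.gauss(0, 0.1)              # X is confounded by C
    y = 2 * x + 3 * c + random.gauss(0, 0.1)
    runs.append((x, c, y))


def ols(features, target):
    """Tiny ordinary least squares via the normal equations (stdlib only).

    Returns [intercept, coef_1, ..., coef_k] for rows of k features.
    """
    k = len(features[0])
    rows = [[1.0] + list(f) for f in features]      # prepend intercept
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k + 1)]
           for i in range(k + 1)]
    xty = [sum(r[i] * t for r, t in zip(rows, target))
           for i in range(k + 1)]
    # Gaussian elimination (forward), then back-substitution.
    for i in range(k + 1):
        for j in range(i + 1, k + 1):
            factor = xtx[j][i] / xtx[i][i]
            for m in range(k + 1):
                xtx[j][m] -= factor * xtx[i][m]
            xty[j] -= factor * xty[i]
    beta = [0.0] * (k + 1)
    for i in reversed(range(k + 1)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k + 1))) / xtx[i][i]
    return beta


ys = [y for _, _, y in runs]
naive_effect = ols([(x,) for x, _, _ in runs], ys)[1]       # ignores C: biased
adjusted_effect = ols([(x, c) for x, c, _ in runs], ys)[1]  # adjusts for C

print(f"naive: {naive_effect:.2f}, adjusted: {adjusted_effect:.2f}")
```

Here the adjusted estimate lands near the true effect of 2, whereas the naive estimate is inflated by the confounder; a metamorphic test outcome ("increasing X by d should increase Y by about 2*d") could thus be checked against existing runs without executing the model again.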
Publisher
Association for Computing Machinery (ACM)
Cited by
4 articles.