Abstract
We study the accurate and efficient computation of the expected number of times each state is visited in discrete- and continuous-time Markov chains. To obtain sound accuracy guarantees efficiently, we lift interval iteration and topological approaches known from the computation of reachability probabilities and expected rewards. We further study applications of expected visiting times, including the sound computation of the stationary distribution and of expected rewards conditioned on reaching multiple goal states. Our implementation of these methods in a probabilistic model checker scales to large systems with millions of states. Experiments on the quantitative verification benchmark set show that computing stationary distributions via expected visiting times consistently outperforms existing approaches, sometimes by several orders of magnitude.
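For intuition about the quantity being computed (this is not the paper's interval-iteration or topological algorithm): in a finite discrete-time Markov chain where all non-absorbing states are transient, the vector x of expected visiting times satisfies the linear system x = ι + Qᵀx, where Q is the transition matrix restricted to transient states and ι the initial distribution over them. A minimal NumPy sketch on an invented 3-state chain (state 2 absorbing):

```python
import numpy as np

# Transient-to-transient transition block Q of a 3-state DTMC.
# From state 0 we move to state 1; from state 1 we return to
# state 0 with prob. 0.5 and get absorbed (state 2) with prob. 0.5.
Q = np.array([[0.0, 1.0],
              [0.5, 0.0]])
iota = np.array([1.0, 0.0])  # chain starts in state 0

# Expected visiting times x solve x = iota + Q^T x,
# i.e. the linear system (I - Q^T) x = iota.
evt = np.linalg.solve(np.eye(2) - Q.T, iota)
print(evt)  # expected visits to states 0 and 1: [2. 2.]
```

Here state 0 is left and re-entered until absorption, each round surviving with probability 0.5, so its expected number of visits is 1/0.5 = 2; solving the system reproduces this. The paper's contribution lies in computing such vectors with sound error bounds at scale rather than by direct exact solving.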
Publisher: Springer Nature Switzerland