Publisher
Springer International Publishing
References (32 articles)
1. Ashok, P., Butkova, Y., Hermanns, H., Křetínský, J.: Continuous-Time Markov Decisions Based on Partial Exploration. ArXiv e-prints (2018). https://arxiv.org/abs/1807.09641
2. Ashok, P., Chatterjee, K., Daca, P., Křetínský, J., Meggendorfer, T.: Value iteration for long-run average reward in Markov decision processes. In: CAV (2017)
3. Aziz, A., Sanwal, K., Singhal, V., Brayton, R.K.: Verifying continuous time Markov chains. In: CAV (1996)
4. Bartocci, E., Bortolussi, L., Brázdil, T., Milios, D., Sanguinetti, G.: Policy learning in continuous-time Markov decision processes using Gaussian processes. Perform. Eval. 116, 84–100 (2017)
5. Brázdil, T., et al.: Verification of Markov decision processes using learning algorithms. In: ATVA (2014)
Cited by
12 articles
1. Fast Parametric Model Checking With Applications to Software Performability Analysis;IEEE Transactions on Software Engineering;2023-10-01
2. Under-Approximating Expected Total Rewards in POMDPs;Tools and Algorithms for the Construction and Analysis of Systems;2022
3. The Modest State of Learning, Sampling, and Verifying Strategies;Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning;2022
4. A Modest Approach to Markov Automata;ACM Transactions on Modeling and Computer Simulation;2021-07-31
5. Markov automata with multiple objectives;Formal Methods in System Design;2021-03-29