Abstract
The distribution theory for reward functions on semi-Markov processes has been of interest since the early 1960s. The relevant asymptotic distribution theory has been developed satisfactorily. By contrast, exact distribution results that allow such distributions to be computed effectively have proved difficult to obtain; indeed, there is no satisfactory exact distribution result for rewards accumulated over a deterministic time interval [0, t], even in the special case of continuous-time Markov chains. The present paper provides neat general results leading to explicit closed-form expressions for the relevant Laplace transforms of general reward functions on semi-Markov and Markov additive processes.
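For orientation only, the following displays the kind of quantity involved in the continuous-time Markov chain special case mentioned above: the reward accumulated over [0, t] and its Laplace transform, expressed through the classical Feynman-Kac-type matrix-exponential identity. The notation (generator Q, reward rates r(i), Laplace argument u) is introduced here for illustration and is not taken from the paper, which treats the more general semi-Markov and Markov additive setting.

% Accumulated reward of a CTMC X over [0, t], with state-dependent
% reward rates r(i); this is the illustrative CTMC special case,
% not the paper's general result.
\[
  R(t) = \int_0^t r(X_s)\,\mathrm{d}s ,
\]
% Its Laplace transform, jointly with the terminal state, is given by a
% matrix exponential of the generator perturbed by the reward rates:
\[
  \mathbb{E}_i\!\left[ e^{-u R(t)} \mathbf{1}\{X_t = j\} \right]
  = \left[ \exp\!\bigl( t\,(Q - u\,\Delta_r) \bigr) \right]_{ij},
  \qquad
  \Delta_r = \operatorname{diag}\bigl(r(1),\dots,r(n)\bigr),\ u \ge 0 .
\]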
Publisher
Cambridge University Press (CUP)
Subject
Statistics, Probability and Uncertainty; General Mathematics; Statistics and Probability
Cited by
7 articles.