Affiliation:
1. Department of Mathematics, Linköping University, Linköping, 58183, Sweden
2. School of Reliability and Systems Engineering, Beihang University, Beijing, 100083, P. R. China
Abstract
We continue our investigation of general large deviation principles (LDPs) for longest runs. Previously, a general LDP for the longest success run in a sequence of independent Bernoulli trials was derived in [Z. Liu and X. Yang, A general large deviation principle for longest runs, Statist. Probab. Lett. 110 (2016), 128–132]. In the present note, we establish a general LDP for the longest success run in a two-state (success or failure) Markov chain, which recovers the result of the aforementioned paper. The main new ingredient is to apply suitable estimates of the distribution function of the longest success run recently established in [Z. Liu and X. Yang, On the longest runs in Markov chains, Probab. Math. Statist. 38 (2018), no. 2, 407–428].
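The object of study is the length L_n of the longest consecutive run of successes among the first n trials of the chain. As a rough illustration only (the transition parameters a and b, the helper longest_success_run, and all numbers below are our own choices, not taken from the paper), a short Python sketch estimates the kind of tail probability that such an LDP controls:

import random

def longest_success_run(n, a, b, p1=None, seed=None):
    """Simulate n steps of a two-state (success/failure) Markov chain and
    return the length of its longest success run.

    a = P(success | previous success), b = P(success | previous failure),
    p1 = P(first trial is a success); defaults to the stationary probability.
    (These parameter names are illustrative, not taken from the paper.)
    """
    rng = random.Random(seed)
    if p1 is None:
        p1 = b / (1.0 - a + b)           # stationary probability of "success"
    success = rng.random() < p1
    longest = current = 1 if success else 0
    for _ in range(n - 1):
        success = rng.random() < (a if success else b)
        current = current + 1 if success else 0
        longest = max(longest, current)
    return longest

# Crude Monte Carlo estimate of the tail P(L_n >= x*n), the kind of probability
# an LDP describes on an exponential scale (illustration only, not the paper's result).
n, x, a, b = 200, 0.1, 0.6, 0.3
trials = 10000
hits = sum(longest_success_run(n, a, b, seed=s) >= x * n for s in range(trials))
print("empirical P(L_n >= x*n) ~", hits / trials)

On the LDP scale one would examine (1/n) log P(L_n >= xn) as n grows; the sketch above only gives a crude finite-n estimate of the probability itself.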
Subject
Applied Mathematics; Computational Theory and Mathematics; Statistics, Probability and Uncertainty; Mathematical Physics