Abstract
We propose two numerical schemes for approximating quasi-stationary distributions (QSD) of finite state Markov chains with absorbing states. Both schemes are described in terms of certain interacting chains in which the interaction is given by the total occupation measure of all particles in the system and has the effect of reinforcing transitions, in an appropriate fashion, to states where the collection of particles has spent more time. The schemes can be viewed as combining the key features of the two basic simulation-based methods for approximating QSD originating from the works of Fleming and Viot (1979) and Aldous, Flannery and Palacios (1998), respectively. The key difference between the two schemes studied here is that in the first method one starts with a(n) particles at time 0 and the number of particles stays constant over time, whereas in the second method we start with one particle and at most one particle is added at each time instant, in such a manner that there are a(n) particles at time n. We prove almost sure convergence to the unique QSD and establish Central Limit Theorems for the two schemes under the key assumption that a(n) = o(n). When a(n) ~ n, the fluctuation behavior is expected to be non-standard. Some exploratory numerical results are presented to illustrate the performance of the two approximation schemes.
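To make the reinforcement idea concrete, the following is a minimal single-particle sketch in the spirit of the Aldous, Flannery and Palacios (1998) approach: a toy absorbing chain is simulated, and upon absorption the particle restarts from a state drawn proportionally to the accumulated occupation measure. The chain, its transition probabilities, and all function names here are illustrative assumptions, not the schemes analyzed in the paper.

```python
import random

# Toy absorbing chain on states {0, 1, 2}; state 0 is absorbing.
# Transition probabilities are illustrative only.
P = {
    1: [(0, 0.1), (1, 0.5), (2, 0.4)],
    2: [(0, 0.1), (1, 0.4), (2, 0.5)],
}

def step(state):
    """Sample the next state from the row P[state]."""
    r, acc = random.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]

def qsd_estimate(n_steps=200_000, seed=0):
    """Estimate the QSD via an occupation-measure-reinforced restart rule."""
    random.seed(seed)
    occupation = {1: 1, 2: 1}  # total time spent in each transient state
    state = 1
    for _ in range(n_steps):
        state = step(state)
        if state == 0:
            # On absorption, restart from a state drawn proportionally to
            # the accumulated occupation measure (the reinforcement step).
            states = list(occupation)
            weights = [occupation[s] for s in states]
            state = random.choices(states, weights=weights)[0]
        occupation[state] += 1
    total = sum(occupation.values())
    return {s: occupation[s] / total for s in occupation}
```

For this symmetric toy chain the normalized occupation measure should converge to the uniform QSD (1/2, 1/2); the multi-particle schemes in the paper replace the single trajectory with a(n) interacting particles sharing one occupation measure.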
Subject
Statistics and Probability