1. Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B.N. Petrov & F. Csaki (Eds.), Proceedings of the Second International Symposium on Information Theory (pp. 267–281). Budapest: Akademiai Kiado.
2. Brooks, S.P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7, 434–455.
3. Casella, G., & George, E.I. (1992). Explaining the Gibbs sampler. The American Statistician, 46, 167–174.
4. Celeux, G., Forbes, F., Robert, C.P., & Titterington, D.M. (2006). Deviance information criteria for missing data models. Bayesian Analysis, 1, 651–674.
5. Chib, S., & Greenberg, E. (1995). Understanding the Metropolis-Hastings algorithm. The American Statistician, 49, 327–335.