alt="x element-of script í’³"/>. The constant
dictates the rate of convergence of the Markov chain. Ergodic Markov chains on finite state spaces are polynomially ergodic. On general state spaces, demonstrating at least polynomial ergodicity usually requires a separate study of the sampler, and we provide some references in
Section 6.
3.1 Means
Recall that $\theta = \mathrm{E}_F g$. For MCMC sampling, a key quantity of interest will be
$$\Sigma = \Lambda + \sum_{k=1}^{\infty}\left[\operatorname{Cov}_F\{g(X_1), g(X_{1+k})\} + \operatorname{Cov}_F\{g(X_1), g(X_{1+k})\}^{\mathsf T}\right],$$
where $\Lambda = \operatorname{Var}_F\{g(X_1)\}$, and we assume $\Sigma$ is positive-definite. A CLT for a Monte Carlo average, $\theta_n = n^{-1}\sum_{t=1}^{n} g(X_t)$, is available under both IID and MCMC sampling.
Theorem 1.
1 IID. Let $\mathrm{E}_F\|g\|^2 < \infty$. If $\Lambda$ is positive-definite, then, as $n \to \infty$,
$$\sqrt{n}\,(\theta_n - \theta) \xrightarrow{d} N_p(0, \Lambda).$$
2 MCMC. Let the Markov chain be polynomially ergodic of order $\xi$ where $\mathrm{E}_F\|g\|^{2+\delta} < \infty$ for some $\delta > 0$ such that $\xi > 1 + 2/\delta$; then, if $\Sigma$ is positive-definite, as $n \to \infty$,
$$\sqrt{n}\,(\theta_n - \theta) \xrightarrow{d} N_p(0, \Sigma).$$
Typically, MCMC algorithms exhibit positive correlation, implying that $\Sigma$ is larger than $\Lambda$. This naturally implies that MCMC simulations require more samples than IID simulations. Using Theorem 1 to assess simulation reliability requires estimation of $\Lambda$ and $\Sigma$, which we describe in Section 4.
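As a quick numerical illustration of Theorem 1 (not part of the formal development), the sketch below compares the replication variance of $\sqrt{n}\,(\theta_n - \theta)$ under IID sampling and under a stationary Gaussian AR(1) chain with autocorrelation $\rho$, for which $\Lambda = 1$ and $\Sigma = (1+\rho)/(1-\rho)$. The AR(1) chain, the choice $\rho = 0.9$, and all variable names are illustrative choices, not taken from the text.

```python
import numpy as np

# Illustration only: compare the CLT variances in Theorem 1 for IID draws versus
# a stationary Gaussian AR(1) chain X_t = rho*X_{t-1} + sqrt(1-rho^2)*Z_t.
# Both have N(0, 1) marginals, so Lambda = 1, while the long-run variance is
# Sigma = (1 + rho) / (1 - rho) > Lambda for rho > 0.

rng = np.random.default_rng(1)
n, reps, rho = 5_000, 1_000, 0.9

def ar1_chain(n, rho, rng):
    """Stationary AR(1) chain with a N(0, 1) marginal distribution."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    innov = np.sqrt(1 - rho**2) * rng.standard_normal(n - 1)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + innov[t - 1]
    return x

# Replication variance of sqrt(n)*(theta_n - theta) with g(x) = x and theta = 0.
iid_means = np.array([rng.standard_normal(n).mean() for _ in range(reps)])
mcmc_means = np.array([ar1_chain(n, rho, rng).mean() for _ in range(reps)])

print("IID:  var of sqrt(n)*theta_n ~", n * iid_means.var(), "(Lambda = 1)")
print("MCMC: var of sqrt(n)*theta_n ~", n * mcmc_means.var(),
      "(Sigma = (1+rho)/(1-rho) =", (1 + rho) / (1 - rho), ")")
```

With $\rho = 0.9$ we have $\Sigma = 19\Lambda$, so roughly 19 times as many draws from this chain are needed to match the precision of IID sampling, consistent with the discussion above.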
3.2 Quantiles
Let $g: \mathcal{X} \to \mathbb{R}$ be univariate, let $F_g$ denote the distribution function of $g(X)$ under $F$, and for $q \in (0,1)$ define the $q$th quantile and its Monte Carlo estimator as
$$\phi_q = F_g^{-1}(q) = \inf\{y : F_g(y) \ge q\} \quad \text{and} \quad \phi_{q,n} = F_{g,n}^{-1}(q),$$
where $F_{g,n}$ is the empirical distribution function of $g(X_1), \dots, g(X_n)$.
An asymptotic distribution for sample quantiles is available under both IID Monte Carlo and MCMC.
Theorem 2.
Let $F_g$ be absolutely continuous and twice differentiable with density $f_g$, and let $f_g'$ be bounded within some neighborhood of $\phi_q$.
1 IID. Let $f_g(\phi_q) > 0$; then, as $n \to \infty$,
$$\sqrt{n}\,(\phi_{q,n} - \phi_q) \xrightarrow{d} N\!\left(0, \frac{q(1-q)}{f_g(\phi_q)^2}\right).$$
2 MCMC. [11] If the Markov chain is polynomially ergodic of order $\xi > 1$ and $f_g(\phi_q) > 0$, then, as $n \to \infty$,
$$\sqrt{n}\,(\phi_{q,n} - \phi_q) \xrightarrow{d} N\!\left(0, \frac{\sigma_q^2}{f_g(\phi_q)^2}\right),$$
where $\sigma_q^2$ is the long-run variance of the indicator process $I\{g(X_t) \le \phi_q\}$, defined analogously to $\Sigma$.
The density value, $f_g(\phi_q)$, can be estimated using a Gaussian kernel density estimator. In addition, $\sigma_q^2$ is replaced with its estimate, the univariate version of the estimators of $\Sigma$ applied to the indicator process $I\{g(X_t) \le \phi_{q,n}\}$. We present methods of estimating $\Sigma$ in Section 4.
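To make the plug-in steps above concrete, here is a minimal sketch (an illustration under assumed choices, not the authors' implementation). A stationary Gaussian AR(1) chain stands in for MCMC output, $g$ is the identity, and $f_g(\phi_q)$ is estimated with a Gaussian kernel density estimator via SciPy's gaussian_kde; estimation of $\sigma_q^2$ itself is deferred to the methods of Section 4.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Illustration only: plug-in pieces of the quantile CLT.
# A stationary Gaussian AR(1) chain (N(0, 1) marginal) stands in for MCMC output,
# with g the identity, so the true q-th quantile is norm.ppf(q).
rng = np.random.default_rng(7)
n, rho, q = 20_000, 0.5, 0.9

x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

phi_q_n = np.quantile(x, q)            # sample quantile estimate of phi_q
f_hat = gaussian_kde(x)(phi_q_n)[0]    # Gaussian KDE estimate of f_g(phi_q)

print("sample quantile:", phi_q_n, "  true:", norm.ppf(q))
print("KDE density at quantile:", f_hat, "  true:", norm.pdf(norm.ppf(q)))
# The asymptotic variance is sigma_q^2 / f_g(phi_q)^2; an estimate of sigma_q^2
# (the long-run variance of the indicator I{g(X_t) <= phi_q}) would come from
# the estimators described in Section 4 and is omitted here.
```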
3.3 Other Estimators
For many estimators, a delta method argument can yield a limiting normal distribution. For example, a CLT for the first two sample moments of $g$ combined with a delta method argument yields an elementwise asymptotic distribution of the sample covariance matrix $\Lambda_n$. Let $\Lambda_{n,ii}$ denote the $i$th diagonal element of $\Lambda_n$. If $g_i$ and $\theta_i$ denote the components of $g$ and $\theta$, respectively, then the $i$th diagonal of