
3.3 Metropolis-Hastings Algorithm

This powerful algorithm provides a general approach for producing a correlated sequence of draws from a target density that may be difficult to sample by classical independent-sampling methods. The goal is to simulate the $ d$-dimensional distribution $ \pi ^{\ast}(\boldsymbol{\psi})$, $ \boldsymbol{\psi} \in \Psi \subseteq \Re ^{d}$, that has density $ \pi (\boldsymbol{\psi})$ with respect to some dominating measure. To define the algorithm, let $ q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$ denote a source density for a candidate draw $ \boldsymbol{\psi}^{\prime}$ given the current value $ \boldsymbol{\psi}$ in the sampled sequence. The density $ q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$ is referred to as the proposal or candidate-generating density. The M-H algorithm is then defined by two steps: a first step in which a proposal value is drawn from the candidate-generating density and a second step in which the proposal value is accepted as the next iterate in the Markov chain according to the probability $ \alpha(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$, where

$\displaystyle \alpha (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})= \begin{cases} \min \left\{ \dfrac{\pi (\boldsymbol{\psi}^{\prime})\,q(\boldsymbol{\psi}^{\prime},\boldsymbol{\psi})}{\pi (\boldsymbol{\psi})\,q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})},1\right\} & \text{if } \pi (\boldsymbol{\psi})\,q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})>0\;; \\ 1 & \text{otherwise}\;. \end{cases}$ (3.10)

If the proposal value is rejected, then the next sampled value is taken to be the current value. In algorithmic form, the simulated values are obtained by the following recursive procedure.

Algorithm 1 (Metropolis-Hastings)  
  1. Specify an initial value $ \boldsymbol{\psi}^{(0)}$.

  2. Repeat for $ j=1,2,\ldots,M$.
    a)
    Propose

    $\displaystyle \boldsymbol{\psi}^{\prime}\sim q\left(\boldsymbol{\psi}^{(j-1)},\cdot\right)$    

    b)
    Let

    $\displaystyle \boldsymbol{\psi}^{(j)}= \begin{cases}\boldsymbol{\psi}^{\prime} & \text{with probability }\alpha\left(\boldsymbol{\psi}^{(j-1)},\boldsymbol{\psi}^{\prime}\right)\;; \\ \boldsymbol{\psi}^{(j-1)} & \text{otherwise}\;. \end{cases}$    

  3. Return the values $ \left\{\boldsymbol{\psi}^{(1)},\boldsymbol{\psi}^{(2)}, \ldots,
\boldsymbol{\psi}^{(M)}\right\}$.

Typically, a certain number of values (say $ n_{0}$) at the start of this sequence are discarded, after which the chain is assumed to have converged to its invariant distribution and the subsequent draws are taken as approximate variates from $ \pi $. Because theoretical calculation of the burn-in is not easy, it is important that the proposal density be chosen to ensure that the chain makes large moves through the support of the invariant distribution without staying in one place for many iterations. Generally, the empirical behavior of the M-H output is monitored by the autocorrelation time of each component of $ \boldsymbol{\psi}$ and by the acceptance rate, which is the proportion of times a move is made as the sampling proceeds.
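To make Algorithm 1 concrete, the following Python sketch implements the two steps for a generic target and proposal. The names log_pi, propose and log_q are illustrative stand-ins for a user-supplied unnormalized log target, a candidate sampler and the log proposal density; they are not part of the text's notation.

\begin{verbatim}
import numpy as np

def metropolis_hastings(log_pi, propose, log_q, psi0, M, n0=0, seed=0):
    """Generic M-H sampler (sketch of Algorithm 1).

    log_pi  : log of the (unnormalized) target density  log pi(psi)
    propose : psi -> candidate draw  psi' ~ q(psi, .)
    log_q   : (psi, psi') -> log q(psi, psi')
    M, n0   : number of retained draws and burn-in length
    """
    rng = np.random.default_rng(seed)
    psi = np.atleast_1d(np.asarray(psi0, dtype=float))
    draws, accepted = [], 0
    for j in range(n0 + M):
        cand = propose(psi)
        # log of alpha(psi, psi') in (3.10)
        log_alpha = (log_pi(cand) - log_pi(psi)
                     + log_q(cand, psi) - log_q(psi, cand))
        if np.log(rng.uniform()) < log_alpha:
            psi, accepted = cand, accepted + 1
        draws.append(np.copy(psi))
    # discard the burn-in and report the acceptance rate
    return np.array(draws[n0:]), accepted / (n0 + M)
\end{verbatim}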

One should observe that the target density appears as a ratio in the probability $ \alpha(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$ and therefore the algorithm can be implemented without knowledge of the normalizing constant of $ \pi (\cdot )$. Furthermore, if the candidate-generating density is symmetric, i.e. $ q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})=
q(\boldsymbol{\psi}^{\prime},\boldsymbol{\psi})$, the acceptance probability only contains the ratio $ \pi (\boldsymbol{\psi}^{\prime})/\pi
(\boldsymbol{\psi})$; hence, if $ \pi (\boldsymbol{\psi}^{\prime})\geq \pi
(\boldsymbol{\psi})$, the chain moves to $ \boldsymbol{\psi}^{\prime}$; otherwise it moves there with probability $ \pi (\boldsymbol{\psi}^{\prime})/\pi(\boldsymbol{\psi})$. The latter is the algorithm originally proposed by [39]. This version of the algorithm is illustrated in Fig. 3.1.

Figure 3.1: Original Metropolis algorithm: higher density proposal is accepted with probability one and the lower density proposal with probability $ \alpha $

Different proposal densities give rise to specific versions of the M-H algorithm, each with the correct invariant distribution $ \pi $. One family of candidate-generating densities is given by $ q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime}) = q(\boldsymbol{\psi}^{\prime} - \boldsymbol{\psi})$. The candidate $ \boldsymbol{\psi}^{\prime}$ is thus drawn according to the process $ \boldsymbol{\psi}^{\prime}=\boldsymbol{\psi}+\boldsymbol{z}$, where $ \boldsymbol{z}$ follows the distribution $ q$. Since the candidate is equal to the current value plus noise, this case is called a random walk M-H chain. Possible choices for $ q$ include the multivariate normal and multivariate-$ t$ densities. The random walk M-H chain is perhaps the simplest version of the M-H algorithm (and was the one used by [39]) and is popular in applications. One has to be careful, however, in setting the variance of $ \boldsymbol{z}$: if it is too large, the chain may remain stuck at a particular value for many iterations, while if it is too small, the chain will tend to make small moves and travel inefficiently through the support of the target distribution. In both cases the generated draws will be highly serially correlated. Note that when $ q$ is symmetric, $ q(\boldsymbol{z})=q(-\boldsymbol{z})$ and the probability of move contains only the ratio $ \pi (\boldsymbol{\psi}^{\prime})/\pi(\boldsymbol{\psi})$. As mentioned earlier, the same reduction occurs if $ q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})= q(\boldsymbol{\psi}^{\prime},\boldsymbol{\psi})$.
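A hedged sketch of the random walk version follows: with symmetric Gaussian increments the proposal terms cancel and only the ratio $ \pi(\boldsymbol{\psi}^{\prime})/\pi(\boldsymbol{\psi})$ enters the acceptance step. The name log_pi and the increment covariance cov are placeholders for the user's target and tuning choice.

\begin{verbatim}
import numpy as np

def random_walk_mh(log_pi, psi0, cov, M, n0=0, seed=0):
    """Random-walk M-H: psi' = psi + z with z ~ N(0, cov) (sketch).

    Because the Gaussian increment density is symmetric, the
    acceptance probability reduces to min{pi(psi')/pi(psi), 1}."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(np.atleast_2d(cov))  # scale of the increments
    psi = np.atleast_1d(np.asarray(psi0, dtype=float))
    draws = []
    for j in range(n0 + M):
        cand = psi + chol @ rng.standard_normal(psi.size)
        if np.log(rng.uniform()) < log_pi(cand) - log_pi(psi):
            psi = cand
        draws.append(psi.copy())
    return np.array(draws[n0:])
\end{verbatim}

Scaling cov up or down trades off the size of the moves against the acceptance rate discussed above.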

[32] considers a second family of candidate-generating densities that are given by the form $ q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})=q(\boldsymbol{\psi}^{\prime})$. [59] refers to this as an independence M-H chain because, in contrast to the random walk chain, the candidates are drawn independently of the current location  $ \boldsymbol{\psi}$. In this case, the probability of move becomes

$\displaystyle \alpha (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})= \min \left\{ \frac{w(\boldsymbol{\psi}^{\prime})}{w(\boldsymbol{\psi})},1\right\}\;;$    

where $ w(\boldsymbol{\psi})=\pi (\boldsymbol{\psi})/q(\boldsymbol{\psi})$ is the ratio of the target and proposal densities. For this method to work and not get stuck in the tails of $ \pi $, it is important that the proposal density have thicker tails than $ \pi $. A similar requirement is placed on the importance sampling function in the method of importance sampling ([27]). In fact, [38] show that if $ w(\boldsymbol{\psi})$ is uniformly bounded then the resulting Markov chain is ergodic.
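A sketch of the independence chain, in the same hypothetical notation (sample_q draws a candidate from $ q$, log_q and log_pi are the log proposal and log target): the acceptance step uses only the weight ratio $ w(\boldsymbol{\psi}^{\prime})/w(\boldsymbol{\psi})$.

\begin{verbatim}
import numpy as np

def independence_mh(log_pi, sample_q, log_q, psi0, M, seed=0):
    """Independence M-H: candidates drawn without reference to the
    current state; acceptance uses w(psi) = pi(psi)/q(psi)."""
    rng = np.random.default_rng(seed)
    psi = np.atleast_1d(np.asarray(psi0, dtype=float))
    log_w = log_pi(psi) - log_q(psi)           # log w(psi)
    draws = []
    for j in range(M):
        cand = sample_q()                      # psi' ~ q(.)
        log_w_cand = log_pi(cand) - log_q(cand)
        if np.log(rng.uniform()) < log_w_cand - log_w:
            psi, log_w = cand, log_w_cand
        draws.append(np.copy(psi))
    return np.array(draws)
\end{verbatim}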

Chib and Greenberg (1994, 1995) discuss a way of formulating proposal densities in the context of time series autoregressive-moving average models that has a bearing on the choice of proposal density for the independence M-H chain. They suggest matching the proposal density to the target at the mode, using a multivariate normal or multivariate-$ t$ distribution with location given by the mode of the target and dispersion given by the inverse of the negative Hessian of the log target evaluated at the mode. Specifically, the parameters of the proposal density are taken to be

$\displaystyle \boldsymbol{m} =\arg \max_{\boldsymbol{\psi}} \log \pi (\boldsymbol{\psi})$   and
$\displaystyle \boldsymbol{V} =\tau \left\{ -\frac{\partial ^{2}\log \pi (\boldsymbol{\psi})}{\partial \boldsymbol{\psi}\,\partial \boldsymbol{\psi}^{\prime}}\right\} ^{-1}_{\boldsymbol{\psi}=\boldsymbol{m}}\;,$ (3.11)

where $ \tau$ is a tuning parameter that is adjusted to control the acceptance rate. The proposal density is then specified as $ q(\boldsymbol{\psi}^{\prime}) =
f(\boldsymbol{\psi}^{\prime}\vert\boldsymbol{m},\boldsymbol{V})$, where $ f$ is some multivariate density. This may be called a tailored M-H chain.
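One way to construct such a tailored proposal in practice is sketched below, assuming SciPy is available: the mode is found numerically and the BFGS inverse-Hessian approximation stands in for $ \boldsymbol{V}$ of (3.11). The exact Hessian, as used in the text, would of course be preferable when it is available.

\begin{verbatim}
import numpy as np
from scipy import optimize, stats

def tailored_proposal(log_pi, psi_init, tau=1.0, df=15):
    """Multivariate-t proposal matched to the target mode (sketch).

    m : numerical mode of log pi;  V : tau times the BFGS
    inverse-Hessian approximation, playing the role of (3.11)."""
    res = optimize.minimize(lambda p: -log_pi(p), psi_init, method="BFGS")
    m, V = res.x, tau * res.hess_inv
    return stats.multivariate_t(loc=m, shape=V, df=df)

# Possible use with the independence sampler sketched earlier:
# prop  = tailored_proposal(log_pi, psi_init)
# draws = independence_mh(log_pi, prop.rvs, prop.logpdf, prop.rvs(), M=5000)
\end{verbatim}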

Another way to generate proposal values is through a Markov chain version of the accept-reject method. In this version, due to [59], and considered in detail by [14], a pseudo accept-reject step is used to generate candidates for an M-H algorithm. Suppose $ c>0$ is a known constant and $ h(\boldsymbol{\psi})$ a source density. Let $ C=\{\boldsymbol{\psi}:\pi (\boldsymbol{\psi})\leq ch(\boldsymbol{\psi})\}$ denote the set of values for which $ ch(\boldsymbol{\psi})$ dominates the target density and assume that this set has high probability under $ \pi ^{\ast}$. Given $ \boldsymbol{\psi}^{(n)}=\boldsymbol{\psi}$, the next value $ \boldsymbol{\psi}^{(n+1)}$ is obtained as follows: first, a candidate value $ \boldsymbol{\psi}^{\prime}$ is obtained, independent of the current value $ \boldsymbol{\psi}$, by applying the accept-reject algorithm with $ ch(\cdot )$ as the ``pseudo dominating'' density. The candidates $ \boldsymbol{\psi}^{\prime}$ produced under this scheme have density $ q(\boldsymbol{\psi}^{\prime})\propto \min \{\pi (\boldsymbol{\psi}^{\prime}),ch(\boldsymbol{\psi}^{\prime})\}$. If we let $ w(\boldsymbol{\psi}) = c^{-1}\pi (\boldsymbol{\psi})/h(\boldsymbol{\psi})$, then it can be shown that the M-H probability of move is given by

$\displaystyle \alpha (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})= \begin{cases} 1 & \boldsymbol{\psi}\in C\;, \\[4pt] \dfrac{1}{w(\boldsymbol{\psi})} & \boldsymbol{\psi}\notin C\;,\quad\boldsymbol{\psi}^{\prime}\in C\;, \\[4pt] \min \left\{ \dfrac{w(\boldsymbol{\psi}^{\prime})}{w(\boldsymbol{\psi})},1\right\} & \boldsymbol{\psi}\notin C\;,\quad\boldsymbol{\psi}^{\prime}\notin C \end{cases}\;.$ (3.12)
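A small helper, stated under the case structure of (3.12), shows how this probability of move would be evaluated; log_pi, log_h and log_c are placeholders for the log target, log source density and the logarithm of the constant $ c$.

\begin{verbatim}
import numpy as np

def ar_mh_alpha(psi, cand, log_pi, log_h, log_c):
    """Probability of move for the accept-reject M-H chain, cf. (3.12).

    C = {psi : pi(psi) <= c h(psi)}  and  w(psi) = pi(psi)/(c h(psi))."""
    log_w = lambda p: log_pi(p) - log_c - log_h(p)
    in_C = lambda p: log_w(p) <= 0.0
    if in_C(psi):
        return 1.0                                  # psi in C
    if in_C(cand):
        return min(1.0, np.exp(-log_w(psi)))        # 1 / w(psi)
    return min(1.0, np.exp(log_w(cand) - log_w(psi)))
\end{verbatim}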

3.3.1 Convergence Results

In the M-H algorithm the transition kernel of the chain is given by

$\displaystyle P(\boldsymbol{\psi},d\boldsymbol{\psi}^{\prime})=q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\,\alpha (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\,d\boldsymbol{\psi}^{\prime}+r(\boldsymbol{\psi})\,\delta _{\boldsymbol{\psi}}(d\boldsymbol{\psi}^{\prime})\;,$ (3.13)

where $ \delta _{\boldsymbol{\psi}}(d\boldsymbol{\psi}^{\prime})=1$ if $ \boldsymbol{\psi}\in d\boldsymbol{\psi}^{\prime}$ and 0 otherwise and

$\displaystyle r(\boldsymbol{\psi})=1-\int\limits_{\Psi} q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\,\alpha (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\,d\boldsymbol{\psi}^{\prime }\;.$    

Thus, transitions from $ \boldsymbol{\psi}$ to $ \boldsymbol{\psi}^{\prime}$ ( $ \boldsymbol{\psi}^{\prime}\neq \boldsymbol{\psi}$) are made according to the density

$\displaystyle p(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\equiv q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\,\alpha (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\;,\quad \boldsymbol{\psi}\neq \boldsymbol{\psi}^{\prime}\;,$    

while transitions from $ \boldsymbol{\psi}$ to $ \boldsymbol{\psi}$ occur with probability  $ r(\boldsymbol{\psi})$. In other words, the density function implied by this transition kernel is of mixed type,

$\displaystyle K(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})=q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\,\alpha (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})+r(\boldsymbol{\psi})\, \delta _{\boldsymbol{\psi}}(\boldsymbol{\psi}^{\prime})\;,$ (3.14)

having both a continuous and a discrete component, where now, with a change of notation, $ \delta _{\boldsymbol{\psi}}(\boldsymbol{\psi}^{\prime})$ is the Dirac delta function defined by $ \delta_{\boldsymbol{\psi}}(\boldsymbol{\psi}^{\prime })=0$ for $ \boldsymbol{\psi}^{\prime}\neq \boldsymbol{\psi}$ and $ \int_{\Psi} \delta_{\boldsymbol{\psi}}(\boldsymbol{\psi}^{\prime})\,d\boldsymbol{\psi}^{\prime }=1$.

[14] provide a way to derive and interpret the probability of move $ \alpha(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$. Consider the proposal density $ q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$. This proposal density is not likely to be reversible for $ \pi $ (if it were, we would be done and M-H sampling would not be necessary). Without loss of generality, suppose that $ \pi(\boldsymbol{\psi})q(\boldsymbol{\psi}, \boldsymbol{\psi}^{\prime})>\pi(\boldsymbol{\psi}^{\prime})q(\boldsymbol{\psi}^{\prime},\boldsymbol{\psi})$, implying that the rate of transitions from $ \boldsymbol{\psi}$ to $ \boldsymbol{\psi}^{\prime}$ exceeds that in the reverse direction. To reduce the transitions from $ \boldsymbol{\psi}$ to $ \boldsymbol{\psi}^{\prime}$, while making the reverse moves with probability one, one can introduce a function $ 0\leq \alpha (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\leq 1$ such that $ \pi(\boldsymbol{\psi})q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\alpha(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})=\pi(\boldsymbol{\psi}^{\prime})q(\boldsymbol{\psi}^{\prime},\boldsymbol{\psi})$. Solving for $ \alpha(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$ yields the probability of move in the M-H algorithm. This calculation reveals the important point that the function $ p(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})=q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\alpha(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$ is reversible by construction, i.e., it satisfies the condition

$\displaystyle q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\,\alpha (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\,\pi (\boldsymbol{\psi})=q(\boldsymbol{\psi}^{\prime},\boldsymbol{\psi})\,\alpha (\boldsymbol{\psi}^{\prime},\boldsymbol{\psi})\,\pi (\boldsymbol{\psi} ^{\prime})\;.$ (3.15)

It immediately follows, therefore, from the argument in (3.6) that the M-H kernel has $ \pi (\boldsymbol{\psi})$ as its invariant density.
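For completeness, here is a short restatement of that argument in the present notation (a sketch of the calculation referred to as (3.6)). For any set $ A$, using (3.13), the reversibility condition (3.15), and $ \int p(\boldsymbol{\psi}^{\prime},\boldsymbol{\psi})\,d\boldsymbol{\psi}=1-r(\boldsymbol{\psi}^{\prime})$,

$\displaystyle \int P(\boldsymbol{\psi},A)\,\pi (\boldsymbol{\psi})\,d\boldsymbol{\psi} =\int\!\!\int_{A}p(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})\,\pi (\boldsymbol{\psi})\,d\boldsymbol{\psi}^{\prime}\,d\boldsymbol{\psi}+\int_{A}r(\boldsymbol{\psi})\,\pi (\boldsymbol{\psi})\,d\boldsymbol{\psi}$
$\displaystyle \qquad=\int_{A}\left[ \int p(\boldsymbol{\psi}^{\prime},\boldsymbol{\psi})\,d\boldsymbol{\psi}\right] \pi (\boldsymbol{\psi}^{\prime})\,d\boldsymbol{\psi}^{\prime}+\int_{A}r(\boldsymbol{\psi}^{\prime})\,\pi (\boldsymbol{\psi}^{\prime})\,d\boldsymbol{\psi}^{\prime} =\int_{A}\pi (\boldsymbol{\psi}^{\prime})\,d\boldsymbol{\psi}^{\prime}\;,$

so that $ \pi^{\ast}$ is indeed invariant under the M-H kernel.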

It is not difficult to provide conditions under which the Markov chain generated by the M-H algorithm satisfies the conditions of Propositions 1-2. The conditions of Proposition 1 are satisfied by the M-H chain if $ q(\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$ is positive and continuous for all $ (\boldsymbol{\psi},\boldsymbol{\psi}^{\prime})$ and the set $ \Psi$ is connected. In addition, the conditions of Proposition 2 are satisfied if $ q$ is not reversible (which is the usual situation), which leads to a chain that is aperiodic. Conditions for ergodicity, required for use of the central limit theorem, are satisfied if, in addition, $ \pi $ is bounded. Other similar conditions are provided by [50].


3.3.2 Example

To illustrate the M-H algorithm, consider the binary response data in Table 3.1, taken from Fahrmeir and Tutz (1997), on the occurrence or non-occurrence of infection following birth by caesarean section. The response variable $ y$ is one if the caesarean birth resulted in an infection and zero if not. There are three covariates: $ x_{1} $, an indicator of whether the caesarean was non-planned; $ x_{2}$, an indicator of whether risk factors were present at the time of birth; and $ x_{3}$, an indicator of whether antibiotics were given as a prophylaxis. The table contains information on $ 251$ births. Under the column of the response, an entry such as $ 11/87$ means that there were $ 98$ deliveries with covariates $ (1,1,1)$, of whom $ 11$ developed an infection and $ 87$ did not.


Table 3.1: Caesarean infection data
$ Y\,(1/0)$ $ x_1$ $ x_2$ $ x_3$
$ 11/87$ $ 1$ $ 1$ $ 1$
$ 1/17$ $ 0$ $ 1$ $ 1$
$ 0/2$ $ 0$ $ 0$ $ 1$
$ 23/3$ $ 1$ $ 1$ $ 0$
$ 28/30$ $ 0$ $ 1$ $ 0$
$ 0/9$ $ 1$ $ 0$ $ 0$
$ 8/32$ $ 0$ $ 0$ $ 0$

Suppose that the probability of infection for the $ i$th birth $ (i\leq
251)$ is

$\displaystyle \Pr (y_{i} =1\vert\boldsymbol{x}_{i},\boldsymbol{\beta})=\Phi (\boldsymbol{x}_{i}^{\prime}\boldsymbol{ \beta})\;,$ (3.16)
$\displaystyle \boldsymbol{\beta} \sim N_{4}(\boldsymbol{0},10\,\boldsymbol{I}_{4})\;,$ (3.17)

where $ \boldsymbol{x}_{i}=(1,x_{i1},x_{i2},x_{i3})^{\top}$ is the covariate vector, $ \boldsymbol{\beta}=(\beta _{0},\beta
_{1},\beta_{2},\beta _{3})$ is the vector of unknown coefficients, $ \Phi $ is the cdf of the standard normal random variable and $ \boldsymbol{I}_{4}$ is the four-dimensional identity matrix. The target posterior density, under the assumption that the outcomes $ \boldsymbol{y}=(y_{1},y_{2},\ldots,y_{251})$ are conditionally independent, is

$\displaystyle \pi (\boldsymbol{\beta}\vert\boldsymbol{y})\propto \pi (\boldsymbol{\beta})\prod_{i=1}^{251}\Phi \left(\boldsymbol{x}_{i}^{\top}\boldsymbol{\beta}\right)^{y_{i}}\left\{1-\Phi \left(\boldsymbol{x}_{i}^{\top}\boldsymbol{\beta}\right)\right\}^{(1-y_{i})}\;,$    

where $ \pi(\boldsymbol{\beta})$ is the density of the $ N(0,10\boldsymbol{I}_{4})$ distribution.
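As an illustration of how this posterior kernel might be coded, the sketch below evaluates the log of the unnormalized posterior for the data of Table 3.1 (prior variance $ 10$, consistent with the prior standard deviation $ 3.162$ reported in Tables 3.2-3.3); the names data and log_posterior are of course illustrative.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Table 3.1 rows: (infections, non-infections, x1, x2, x3)
data = [(11, 87, 1, 1, 1), (1, 17, 0, 1, 1), (0, 2, 0, 0, 1),
        (23, 3, 1, 1, 0), (28, 30, 0, 1, 0), (0, 9, 1, 0, 0),
        (8, 32, 0, 0, 0)]
y, X = [], []
for n1, n0, x1, x2, x3 in data:
    y += [1] * n1 + [0] * n0
    X += [[1, x1, x2, x3]] * (n1 + n0)
y, X = np.array(y), np.array(X, dtype=float)      # 251 observations

def log_posterior(beta, prior_var=10.0):
    """Log kernel of the probit posterior in (3.16)-(3.17)."""
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)  # clip for stability
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loglik - 0.5 * beta @ beta / prior_var
\end{verbatim}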

3.3.2.1 Random Walk Proposal Density

To define the proposal density, let

$\displaystyle \boldsymbol{\hat{\beta}}= (-{1.093022}\;\;{0.607643}\;\;{1.197543}\;\;-{1.904739})^{\top}$    

be the MLE found using the Newton-Raphson algorithm and let

$\displaystyle \boldsymbol{V}=\left( \begin{array}{llll} \hphantom{-}0.040745 & -0.007038 & \cdots & \cdots \\ & \cdots & \cdots & \cdots \\ & & \hphantom{-}0.062292 & -0.016803 \\ & & & \hphantom{-}0.080788 \end{array} \right)$    

be the symmetric matrix obtained by inverting the negative of the Hessian matrix (the matrix of second derivatives) of the log-likelihood function evaluated at  $ \boldsymbol{\hat{\beta}}$. Now generate the proposal values by the random walk:

$\displaystyle \boldsymbol{\beta}^{\prime} = \boldsymbol{\beta}^{(j-1)}+\boldsymbol{\varepsilon}^{(j)}\;,\qquad \boldsymbol{\varepsilon}^{(j)} \sim N_{4}(\boldsymbol{0},\boldsymbol{V}) \;,$ (3.18)

which leads to the original Metropolis method. From a run of $ 5000$ iterations of the algorithm beyond a burn-in of $ 100$ iterations we get the prior-posterior summary reported in Table 3.2, which contains the first two moments of the prior and posterior and the $ {2.5}$th (lower) and $ {97.5}$th (upper) percentiles of the marginal densities of  $ \boldsymbol {\beta }$.
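Combining the pieces sketched earlier (log_posterior and random_walk_mh), a run along these lines could look as follows; the mode and curvature are obtained here by numerical optimization rather than Newton-Raphson, so the output will only approximately reproduce Table 3.2.

\begin{verbatim}
import numpy as np
from scipy import optimize

# numerical stand-ins for the Newton-Raphson MLE and inverse Hessian
res = optimize.minimize(lambda b: -log_posterior(b), np.zeros(4),
                        method="BFGS")
beta_hat, V = res.x, res.hess_inv

draws = random_walk_mh(log_posterior, beta_hat, V, M=5000, n0=100)
summary = np.column_stack([draws.mean(axis=0), draws.std(axis=0),
                           np.percentile(draws, [2.5, 97.5], axis=0).T])
print(summary)   # posterior mean, std dev, lower, upper for each beta_j
\end{verbatim}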


Table 3.2: Caesarean data: Prior-posterior summary based on $ 5000$ draws (beyond a burn-in of $ 100$ cycles) from the random-walk M-H algorithm
  Prior Posterior
  Mean Std dev Mean Std dev Lower Upper
$ \beta_{0}$ $ 0.000$ $ 3.162$ $ -1.110$ $ 0.224$ $ -1.553$ $ -0.677$
$ \beta_{1}$ $ 0.000$ $ 3.162$ $ 0.612$ $ 0.254$ $ 0.116$ $ 1.127$
$ \beta_{2}$ $ 0.000$ $ 3.162$ $ 1.198$ $ 0.263$ $ 0.689$ $ 1.725$
$ \beta_{3}$ $ 0.000$ $ 3.162$ $ -1.901$ $ 0.275$ $ -2.477$ $ -1.354$

As expected, both the first and second covariates increase the probability of infection while the third covariate (the antibiotics prophylaxis) reduces the probability of infection.

To get an idea of the form of the posterior density we plot in Fig. 3.2 the four marginal posterior densities. The density plots are obtained by smoothing the histogram of the simulated values with a Gaussian kernel. In the same figure we also report the autocorrelation functions (correlation against lag) for each of the sampled parameter values. The autocorrelation plots provide information about the extent of serial dependence in the sampled values. Here we see that the serial correlations start out high but decline to almost zero by lag twenty.
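The autocorrelations (and the related inefficiency factors mentioned below) can be estimated directly from the sampled values; the following is a simple truncated-sum sketch, with max_lag an arbitrary cut-off.

\begin{verbatim}
import numpy as np

def acf(x, max_lag=20):
    """Sample autocorrelations of a scalar chain up to max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, max_lag + 1)])

def inefficiency_factor(x, max_lag=20):
    """1 + 2 * (sum of autocorrelations): a simple estimate of the
    autocorrelation time of the chain."""
    return 1.0 + 2.0 * acf(x, max_lag)[1:].sum()

# e.g. one value per component of the sampled vector:
# [inefficiency_factor(draws[:, j]) for j in range(draws.shape[1])]
\end{verbatim}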

Figure 3.2: Caesarean data with random-walk M-H algorithm: Marginal posterior densities (top panel) and autocorrelation plot (bottom panel)

3.3.2.2 Tailored Proposal Density


Table 3.3: Caesarean data: Prior-posterior summary based on $ 5000$ draws (beyond a burn-in of $ 100$ cycles) from the tailored M-H algorithm
  Prior Posterior
  Mean Std dev Mean Std dev Lower Upper
$ \beta_{0}$ $ 0.000$ $ 3.162$ $ -1.080$ $ 0.220$ $ -1.526$ $ -0.670$
$ \beta_{1}$ $ 0.000$ $ 3.162$ $ 0.593$ $ 0.249$ $ 0.116$ $ 1.095$
$ \beta_{2}$ $ 0.000$ $ 3.162$ $ 1.181$ $ 0.254$ $ 0.680$ $ 1.694$
$ \beta_{3}$ $ 0.000$ $ 3.162$ $ -1.889$ $ 0.266$ $ -2.421$ $ -1.385$

To see the difference in results, the M-H algorithm is next implemented with a tailored proposal density. In this scheme one utilizes both $ \boldsymbol{\hat{\beta}}$ and $ \boldsymbol{V}$ that were defined above. We let the proposal density be $ f_{T}(\boldsymbol{\beta}\vert\boldsymbol{\hat{\beta}}, \boldsymbol{V},15)$, a multivariate-$ t$ density with fifteen degrees of freedom. This proposal density differs from the random-walk proposal in that the distribution is centered at the fixed point $ \boldsymbol{\hat{\beta}}$ rather than at the current value. The prior-posterior summary based on $ 5000$ draws of the M-H algorithm with this proposal density is given in Table 3.3. We see that the marginal posterior moments are similar to those in Table 3.2. The marginal posterior densities are reported in the top panel of Fig. 3.3. These are virtually identical to those computed using the random-walk M-H algorithm. The most notable difference is in the serial correlation plots, which decline much more quickly to zero, indicating that the algorithm is mixing well. The same information is revealed by the inefficiency factors, which are much closer to one than those from the previous algorithm.

The message from this analysis is that the two proposal densities produce similar results, with the differences appearing only in the autocorrelation plots (and inefficiency factors) of the sampled draws.

Figure 3.3: Caesarean data with tailored M-H algorithm: Marginal posterior densities (top panel) and autocorrelation plot (bottom panel)

3.3.3 Multiple-Block M-H Algorithm

In applications when the dimension of $ \boldsymbol{\psi}$ is large, it can be difficult to construct a single-block M-H algorithm that converges rapidly to the target density. In such cases, it is helpful to break up the variate space into smaller blocks and then to construct a Markov chain with these smaller blocks. Suppose, for illustration, that $ \boldsymbol{\psi}$ is split into two vector blocks $ (\boldsymbol{\psi}_{1},\boldsymbol{\psi}_{2})$. For example, in a regression model, one block may consist of the regression coefficients and the other of the error variance. Next, for each block, let

$\displaystyle q_{1}\left(\boldsymbol{\psi}_{1},\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right)\quad\text{and}\quad q_{2}\left(\boldsymbol{\psi}_{2},\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1}\right)\;,$    

denote the corresponding proposal densities. Here each proposal density $ q_{k}$ is allowed to depend on the data and the current value of the remaining block. Also define (by analogy with the single-block case)

$\displaystyle \alpha (\boldsymbol{\psi}_{1},\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2})=\min \left\{ 1,\frac{\pi (\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2})\,q_{1}(\boldsymbol{\psi}_{1}^{\prime},\boldsymbol{\psi}_{1}\vert\boldsymbol{\psi}_{2})}{\pi (\boldsymbol{\psi}_{1}\vert\boldsymbol{\psi}_{2})\,q_{1}(\boldsymbol{\psi}_{1},\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2})}\right\}\;,$ (3.19)

and

$\displaystyle \alpha (\boldsymbol{\psi}_{2},\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1})=\min \left\{ 1,\frac{\pi (\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1})\,q_{2}(\boldsymbol{\psi}_{2}^{\prime},\boldsymbol{\psi}_{2}\vert\boldsymbol{\psi}_{1})}{\pi (\boldsymbol{\psi}_{2}\vert\boldsymbol{\psi}_{1})\,q_{2}(\boldsymbol{\psi}_{2},\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1})}\right\} \;,$ (3.20)

as the probability of move for block $ \boldsymbol{\psi}_{k}$ $ (k=1,2)$ conditioned on the other block. The conditional densities

$\displaystyle \pi (\boldsymbol{\psi}_{1}\vert\boldsymbol{\psi}_{2})$   and$\displaystyle \quad \pi (\boldsymbol{\psi}_{2}\vert\boldsymbol{\psi}_{1})$    

that appear in these functions are called the full conditional densities. By Bayes theorem each is proportional to the joint density. For example,

$\displaystyle \pi (\boldsymbol{\psi}_{1}\vert\boldsymbol{\psi}_{2})\propto \pi (\boldsymbol{\psi}_{1}, \boldsymbol{\psi}_{2})\;,$    

and, therefore, the probabilities of move in (3.19) and (3.20) can be expressed equivalently in terms of the kernel of the joint posterior density $ \pi(\boldsymbol{\psi}_{1},\boldsymbol{\psi}_{2})$ because the normalizing constant of the full conditional density (the norming constant in the latter expression) cancels in forming the ratio.

With these inputs, one sweep of the multiple-block M-H algorithm is completed by updating each block, say sequentially in fixed order, using a M-H step with the above probabilities of move, given the most current value of the other block.

Algorithm 2 (Multiple-Block Metropolis-Hastings)  
  1. Specify an initial value $ \boldsymbol{\psi}^{(0)} = \left(\boldsymbol{\psi}_{1}^{(0)},\boldsymbol{\psi}_{2}^{(0)}\right)$.

  2. Repeat for $ j=1,2,\ldots,n_0+M$.

    a)
    Repeat for $ k=1,2$

    I.
    Propose a value for the $ k$th block, conditioned on the previous value of the $ k$th block and the current value of the other block $ \boldsymbol{\psi}_{-k}$:

    $\displaystyle \boldsymbol{\psi}_{k}^{\prime}\sim q_{k}\left(\boldsymbol{\psi}_{k}^{(j-1)},\cdot\vert\boldsymbol{\psi}_{-k}\right)\;.$    

    II.
    Calculate the probability of move

    $\displaystyle \alpha _{k}\left(\boldsymbol{\psi}_{k}^{(j-1)},\boldsymbol{\psi}_{k}^{\prime}\vert\boldsymbol{\psi}_{-k}\right)=\min \left\{ 1,\frac{\pi \left(\boldsymbol{\psi}_{k}^{\prime}\vert\boldsymbol{\psi}_{-k}\right) q_{k}\left(\boldsymbol{\psi}_{k}^{\prime},\boldsymbol{\psi}_{k}^{(j-1)}\vert\boldsymbol{\psi}_{-k}\right)}{\pi \left(\boldsymbol{\psi}_{k}^{(j-1)}\vert\boldsymbol{\psi}_{-k}\right) q_{k}\left(\boldsymbol{\psi}_{k}^{(j-1)},\boldsymbol{\psi}_{k}^{\prime}\vert\boldsymbol{\psi}_{-k}\right)}\right\} .$    

    III.
    Update the $ k$th block as

    $\displaystyle \boldsymbol{\psi}_{k}^{(j)}= \begin{cases}\boldsymbol{\psi}_{k}^{\prime} & \text{with probability }\alpha _{k}\left(\boldsymbol{\psi}_{k}^{(j-1)},\boldsymbol{\psi}_{k}^{\prime}\vert\boldsymbol{\psi}_{-k}\right)\;, \\ \boldsymbol{\psi}_{k}^{(j-1)} & \text{with probability } 1-\alpha _{k}\left(\boldsymbol{\psi}_{k}^{(j-1)},\boldsymbol{\psi}_{k}^{\prime}\vert\boldsymbol{\psi}_{-k}\right) \end{cases}\;.$    

  3. Return the values $ \left\{\boldsymbol{\psi}^{(n_{0}+1)},\boldsymbol{\psi}^{(n_{0}+2)},\ldots,\boldsymbol{\psi}^{(n_{0}+M)}\right\}\,.$

The extension of this method to more than two blocks is straightforward.
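A generic sketch of one sweep of Algorithm 2 (here for an arbitrary number of blocks, updated in fixed order) is given below; blocks, propose and log_q are illustrative containers for the block index sets, the block proposal samplers $ q_{k}$ and their log densities, and log_pi is the log of the joint target kernel.

\begin{verbatim}
import numpy as np

def multiple_block_mh(log_pi, blocks, propose, log_q, psi0, M, n0=0, seed=0):
    """Multiple-block M-H (sketch of Algorithm 2).

    blocks     : list of index arrays partitioning psi
    propose[k] : psi -> candidate sub-vector for block k
    log_q[k]   : (psi, cand_k) -> log q_k(psi_k, cand_k | psi_-k)
    """
    rng = np.random.default_rng(seed)
    psi = np.asarray(psi0, dtype=float).copy()
    draws = []
    for j in range(n0 + M):
        for k, idx in enumerate(blocks):
            cand = psi.copy()
            cand[idx] = propose[k](psi)
            # the full-conditional ratio in (3.19)-(3.20) reduces to a
            # ratio of the joint kernel, so log_pi suffices here
            log_alpha = (log_pi(cand) - log_pi(psi)
                         + log_q[k](cand, psi[idx]) - log_q[k](psi, cand[idx]))
            if np.log(rng.uniform()) < log_alpha:
                psi = cand
        draws.append(psi.copy())
    return np.array(draws[n0:])
\end{verbatim}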

The transition kernel of the resulting Markov chain is given by the product of transition kernels

$\displaystyle P(\boldsymbol{\psi},d\boldsymbol{\psi}^{\prime})=\prod_{k=1}^{2}P_{k}\left(\boldsymbol{\psi}_{k},d\boldsymbol{\psi}_{k}^{\prime}\vert\boldsymbol{\psi}_{-k}\right)\;.$ (3.21)

This transition kernel is not reversible, as can be easily checked, because under fixed sequential updating of the blocks, updating in the reverse order never occurs. The multiple-block M-H algorithm, however, satisfies the weaker condition of invariance. To show this, we utilize the fact that each sub-move satisfies local reversibility ([17]) and therefore the transition kernel $ P_{1}(\boldsymbol{\psi}_{1},d\boldsymbol{\psi}_{1}\vert\boldsymbol{\psi}_{2})$ has $ \pi _{1\vert 2}^{\ast}(\cdot\vert\boldsymbol{\psi}_{2})$ as its local invariant distribution, with density $ \pi _{1\vert 2}(\cdot\vert\boldsymbol{\psi}_{2})$, i.e.,

$\displaystyle \pi _{1\vert 2}^{\ast}\left(d\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right)=\int P_{1}\left(\boldsymbol{\psi}_{1},d\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right)\pi _{1\vert 2}(\boldsymbol{\psi}_{1}\vert\boldsymbol{\psi}_{2})\,d\boldsymbol{\psi}_{1}\;.$ (3.22)

Similarly, the conditional transition kernel $ P_{2}(\boldsymbol{\psi}_{2},d \boldsymbol{ \psi}_{2}\vert\boldsymbol{\psi}_{1})$ has $ \pi _{2\vert 1}^{\ast}(\cdot \vert \boldsymbol{\psi}_{1})$ as its invariant distribution, for a given value of $ \boldsymbol{\psi}_{1}$. Then, the kernel formed by multiplying the conditional kernels is invariant for $ \pi ^{\ast}(\cdot ,\cdot )$:

$\displaystyle \iint P_{1}\left(\boldsymbol{\psi}_{1},d\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right) P_{2}\left(\boldsymbol{\psi}_{2},d\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1}^{\prime}\right)\pi (\boldsymbol{\psi}_{1},\boldsymbol{\psi}_{2})\,d\boldsymbol{\psi}_{1}\,d\boldsymbol{\psi}_{2}$
$\displaystyle \qquad=\int P_{2}\left(\boldsymbol{\psi}_{2},d\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1}^{\prime}\right)\left[ \int P_{1}\left(\boldsymbol{\psi}_{1},d\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right)\pi _{1\vert 2}(\boldsymbol{\psi}_{1}\vert\boldsymbol{\psi}_{2})\,d\boldsymbol{\psi}_{1}\right] \pi _{2}(\boldsymbol{\psi}_{2})\,d\boldsymbol{\psi}_{2}$
$\displaystyle \qquad=\int P_{2}\left(\boldsymbol{\psi}_{2},d\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1}^{\prime}\right)\pi _{1\vert 2}^{\ast}\left(d\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right)\pi _{2}(\boldsymbol{\psi}_{2})\,d\boldsymbol{\psi}_{2}$
$\displaystyle \qquad=\int P_{2}\left(\boldsymbol{\psi}_{2},d\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1}^{\prime}\right)\frac{\pi _{2\vert 1}(\boldsymbol{\psi}_{2}\vert\boldsymbol{\psi}_{1}^{\prime})\,\pi _{1}^{\ast}\left(d\boldsymbol{\psi}_{1}^{\prime}\right)}{\pi _{2}(\boldsymbol{\psi}_{2})}\,\pi _{2}(\boldsymbol{\psi}_{2})\,d\boldsymbol{\psi}_{2}$
$\displaystyle \qquad=\pi _{1}^{\ast}\left(d\boldsymbol{\psi}_{1}^{\prime}\right)\int P_{2}\left(\boldsymbol{\psi}_{2},d\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1}^{\prime}\right)\pi _{2\vert 1}(\boldsymbol{\psi}_{2}\vert\boldsymbol{\psi}_{1}^{\prime})\,d\boldsymbol{\psi}_{2}$
$\displaystyle \qquad=\pi _{1}^{\ast}\left(d\boldsymbol{\psi}_{1}^{\prime}\right)\pi _{2\vert 1}^{\ast}\left(d\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1}^{\prime}\right)$
$\displaystyle \qquad=\pi ^{\ast}\left(d\boldsymbol{\psi}_{1}^{\prime},d\boldsymbol{\psi}_{2}^{\prime}\right)\;,$

where the third line follows from (3.22), the fourth from Bayes theorem, the sixth from assumed invariance of $ P_{2}$, and the last from the law of total probability.

The implication of this result is that it allows us to take draws in succession from each of the kernels, instead of having to run each to convergence for every value of the conditioning variable.

Remark 1   Versions of either random-walk or tailored proposal densities can be used in this algorithm, analogous to the single-block case. For example, [14] determine the proposal densities $ q_{k}$ by tailoring to $ \pi (\boldsymbol{\psi}_{k},\boldsymbol{\psi}_{-k})$ in which case the proposal density is not fixed but varies across iterations. An important special case occurs if each proposal density is taken to be the full conditional density of that block. Specifically, if we set

$\displaystyle q_{1}\left(\boldsymbol{\psi}_{1}^{(j-1)},\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right)=\pi (\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}) \;,$    

and

$\displaystyle q_{2}\left(\boldsymbol{\psi}_{2}^{(j-1)},\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1}\right)=\pi (\boldsymbol{\psi}_{2}^{\prime}\vert\boldsymbol{\psi}_{1})\;,$    

then an interesting simplification occurs. The probability of move (for the first block) becomes

$\displaystyle \alpha _{1}\left(\boldsymbol{\psi}_{1}^{(j-1)},\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right) =\min \left\{ 1,\frac{\pi \left(\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right)\pi \left(\boldsymbol{\psi}_{1}^{(j-1)}\vert\boldsymbol{\psi}_{2}\right)}{\pi \left(\boldsymbol{\psi}_{1}^{(j-1)}\vert\boldsymbol{\psi}_{2}\right)\pi \left(\boldsymbol{\psi}_{1}^{\prime}\vert\boldsymbol{\psi}_{2}\right)}\right\} =1 \;,$    

and similarly for the second block, implying that if proposal values are drawn from their full conditional densities then the proposal values are accepted with probability one. This special case of the multiple-block M-H algorithm (in which each block is proposed using its full conditional distribution) is called the Gibbs sampling algorithm.

