Markov chain Monte Carlo sampling

Markov Chain Monte Carlo (MCMC) methods

Metropolis-Hastings algorithm

The Metropolis-Hastings algorithm generates a sequence of samples \(x\) whose distribution approaches the target distribution \(f\).


The algorithm takes an arbitrary starting sample \(x_0\). It then must decide which sample to consider next.


It does this using a Markov chain: the next candidate \(x_j\) is drawn from a proposal distribution \(g(x_j \mid x_i)\) that depends only on the current sample \(x_i\).

This proposal distribution is generally a normal distribution centred on \(x_i\), making the process a random walk.


Now that we have a candidate sample, we can either accept or reject it. It is this step that makes the final distribution approximate the target function.

We accept if \(\dfrac{f(x_j)}{f(x_i)}>u\), where \(u\) is drawn uniformly from \([0,1]\) each time; otherwise we keep the current sample \(x_i\).

We can calculate this ratio because we know the target function \(f\), at least up to a normalising constant, which cancels in the ratio.
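The steps above can be sketched as a random-walk Metropolis sampler in Python. The target density `target` and the step size are illustrative assumptions, not taken from the text:

```python
import math
import random

def metropolis_hastings(f, x0, n_samples, step=1.0):
    """Random-walk Metropolis sampler for an unnormalised density f."""
    x = x0
    samples = []
    for _ in range(n_samples):
        # Propose x_j from a normal distribution centred on the current x_i.
        x_new = random.gauss(x, step)
        # Accept if f(x_j) / f(x_i) > u, with u ~ Uniform(0, 1);
        # otherwise keep the current sample x_i.
        if f(x_new) / f(x) > random.random():
            x = x_new
        samples.append(x)
    return samples

# Illustrative target: an unnormalised standard normal density.
target = lambda x: math.exp(-x * x / 2)

random.seed(0)
chain = metropolis_hastings(target, x0=0.0, n_samples=10_000)
```

Because the acceptance test uses only the ratio \(f(x_j)/f(x_i)\), the sampler works even when \(f\) is known only up to a normalising constant.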


Gibbs sampling


As with Metropolis-Hastings, we want to generate samples from \(P(X)\) and use these to approximate its form.

We do this by using the conditional distributions. If \(X\) is a vector with components \(X_j\), then we also have the conditionals \(P(X_j \mid X_{-j})\), where \(X_{-j}\) denotes all components other than \(X_j\).


We use our knowledge of these conditional distributions.

Start with a vector \(x_0\).

This has components \(x_{0,j}\).

To form the next vector \(x_1\) we loop through each component.


We sample the first component from its conditional distribution, given the remaining components of \(x_0\), to form \(x_{1,0}\).

However, after the first component we update the conditioning so it uses the already-updated variables: \(x_{1,1}\) is sampled given \(x_{1,0}\) and the remaining components of \(x_0\), and so on.


This means we only need to know the conditional distributions, not the full joint distribution.
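A minimal sketch of this loop, assuming a target where the conditionals are known in closed form: for a standard bivariate normal with correlation \(\rho\), each conditional is itself normal, \(X \mid Y = y \sim N(\rho y,\, 1 - \rho^2)\), and symmetrically for \(Y \mid X = x\). The choice of target here is an illustrative assumption:

```python
import random

def gibbs_bivariate_normal(rho, n_samples, x0=0.0, y0=0.0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each conditional is normal: X | Y=y ~ N(rho * y, 1 - rho**2),
    and symmetrically for Y | X=x.
    """
    sd = (1 - rho ** 2) ** 0.5
    x, y = x0, y0
    samples = []
    for _ in range(n_samples):
        # Sample each component from its conditional, conditioning on
        # the already-updated value of the other component.
        x = random.gauss(rho * y, sd)
        y = random.gauss(rho * x, sd)
        samples.append((x, y))
    return samples

random.seed(0)
chain = gibbs_bivariate_normal(rho=0.8, n_samples=10_000)
```

Note that only the conditional distributions appear in the loop; the joint density is never evaluated, which is the practical advantage described above.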