National Centre of Competence in Research PlanetS

Basic MCMC: implementing the Metropolis algorithm.

Lecturer: Daniel Mortlock (Imperial College London)

Lecture abstract.

Bayesian parameter inference consists of calculating Pr(theta | data, prior), the posterior distribution of the parameters, theta, conditional on whatever data are available (and some prior knowledge). In many situations it is hard to guess from the data which parameter values are favoured, i.e., where the function Pr(theta | data, prior) peaks and how wide any peaks are. One generic way around this is to use a set of samples from the posterior distribution, {theta_i}, in place of the function itself, but some algorithm is needed to generate these samples. A simple, but very general, option is the Metropolis algorithm, one of the most basic forms of Markov chain Monte Carlo (MCMC). The Metropolis algorithm is a guided random walk through parameter space that concentrates on the theta values for which the posterior density is highest, and it can produce samples from the posterior given only the ability to evaluate a function that is proportional to Pr(theta | data, prior), the standard option being the product of the likelihood, Pr(data | theta), and the prior, Pr(theta | prior).
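
As a concrete illustration of these ingredients, here is a minimal random-walk Metropolis sampler in Python. This sketch is not taken from the lecture itself: the names (metropolis, log_post, step_size, and so on) are illustrative, and the target log_posterior is a placeholder standing in for log[Pr(data | theta) Pr(theta | prior)].

import numpy as np

def log_posterior(theta):
    # Placeholder target: the log of a function proportional to
    # Pr(theta | data, prior); here an unnormalised standard Gaussian.
    return -0.5 * np.sum(theta ** 2)

def metropolis(log_post, theta0, n_samples, step_size=1.0, seed=None):
    # Random-walk Metropolis: propose symmetrically about the current
    # point and accept with probability min(1, p_new / p_old).
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    log_p = log_post(theta)
    samples = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step_size * rng.standard_normal(theta.size)
        log_p_prop = log_post(proposal)
        # Accepting when log(u) < log_p_prop - log_p is equivalent to
        # accepting with probability min(1, exp(log_p_prop - log_p));
        # on rejection the chain repeats the current point.
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = proposal, log_p_prop
        samples[i] = theta
    return samples

samples = metropolis(log_posterior, theta0=[0.0], n_samples=10000)

Because only differences of log-posterior values enter the acceptance test, the normalising constant of the posterior is never needed, which is exactly why a function merely proportional to Pr(theta | data, prior) suffices.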

This lecture will describe the basics of the Metropolis algorithm, giving all the ingredients necessary to implement it and to do full Bayesian parameter estimation for “moderate” problems (particularly those in which the posterior has a single peak and there are fewer than ten parameters). For more challenging problems it will generally be necessary to use more bespoke software packages, or other algorithms such as Hamiltonian Monte Carlo and nested sampling (some of which are covered in other lectures at this school).
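
To make the parameter-estimation step concrete, the sketch below (again illustrative, reusing the metropolis function above with made-up data) estimates the mean mu of a Gaussian with known unit variance under a flat prior, then summarises the posterior from the samples:

# Hypothetical measurements with known sigma = 1 and a flat prior on mu,
# so the posterior is proportional to the likelihood Pr(data | mu).
data = np.array([4.8, 5.1, 5.3, 4.9, 5.2])

def log_post_mu(mu):
    return -0.5 * np.sum((data - mu) ** 2)

chain = metropolis(log_post_mu, theta0=[0.0], n_samples=20000, step_size=0.5)
chain = chain[5000:]  # discard the burn-in before summarising
print("posterior mean of mu:", chain.mean())
print("68% credible interval:", np.percentile(chain, [16, 84]))

With a single parameter and a single posterior peak, this toy problem sits firmly in the “moderate” regime described above; for multimodal or higher-dimensional posteriors, tuning the step size and checking convergence become the hard parts.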
