Gibbs sampling

Sampling and inference tasks

In sampling, we are concerned with how to sample from a target probability distribution $p(x_1, \ldots, x_n)$. Given samples $x^{(1)}, \ldots, x^{(T)} \sim p$, we can express a quantity of interest as the expected value $\mathbb{E}_{x \sim p}[f(x)]$ of a random variable $f(x)$ and then use the estimator $\hat{f}_T = \frac{1}{T} \sum_{t=1}^T f(x^{(t)})$ to estimate $\mathbb{E}_{x \sim p}[f(x)]$. For example, to estimate the marginal probability $p(x_i = a)$, we let $f(x) = \mathbb{1}\{x_i = a\}$. Thus, we see that we can use an MCMC algorithm to draw samples from $p$ and then use these samples to estimate quantities of interest.
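To make this concrete, here is a minimal Python sketch of the estimator; for illustration, the samples are drawn directly from a known discrete distribution rather than produced by an MCMC algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# For illustration, fake T = 10,000 samples from a known discrete
# distribution instead of running an MCMC algorithm.
samples = rng.choice([0, 1, 2], size=10_000, p=[0.2, 0.5, 0.3])

# To estimate the marginal probability p(x = 1), take f(x) = 1{x = 1}
# and average f over the samples.
f_values = (samples == 1).astype(float)
print("estimated p(x = 1):", f_values.mean())  # close to 0.5
```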

Gibbs sampling

Last lecture, we described the Metropolis-Hastings (MH) algorithm for sampling from a probability distribution $p(x)$ by using a proposal distribution $q(x' \mid x)$ and acceptance probabilities for accepting proposed moves. The MH algorithm only requires computing the ratios $p(x') / p(x)$. Note that the MH algorithm treats $p$ as a black box and does not leverage any particular structure of $p$. Similarly, the proposal distribution $q$ we choose typically does not depend on $p$ and also does not leverage any structure of $p$.

Gibbs sampling is an MCMC algorithm that repeatedly samples from the conditional distribution of one variable of the target distribution $p(x_1, \ldots, x_n)$, given all of the other variables. Gibbs sampling works as follows:

  1. Initialize $x_i^{(0)}$ for $i = 1, \ldots, n$
  2. For $t = 1, 2, \ldots$:
    1. Pick an index $i$ uniformly at random from $\{1, \ldots, n\}$.
    2. Draw a sample $x_i' \sim p(x_i \mid x_{-i}^{(t-1)})$, where $x_{-i}$ is the set of all variables in $x$ except for the $i$-th variable.
    3. Let $x^{(t)} = (x_1^{(t-1)}, \ldots, x_{i-1}^{(t-1)}, x_i', x_{i+1}^{(t-1)}, \ldots, x_n^{(t-1)})$.

Gibbs sampling assumes we can compute the conditional distribution of each variable given all of the other variables, and sample exactly from these distributions. In graphical models, the conditional distribution of a variable given all of the others only depends on the variables in the Markov blanket of that node.
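As a concrete illustration, here is a minimal sketch of the algorithm for a bivariate normal target with correlation $\rho$, chosen because its single-variable conditionals are available in closed form (the target itself is an arbitrary illustrative choice):

```python
import numpy as np

# Random-scan Gibbs sampling for a bivariate normal with zero means, unit
# variances, and correlation rho. For this target, the exact conditionals
# are x_i | x_{-i} ~ N(rho * x_{-i}, 1 - rho^2).
rho = 0.8
rng = np.random.default_rng(0)

x = np.zeros(2)               # step 1: initialize x^(0)
samples = []
for t in range(50_000):       # step 2: repeat
    i = rng.integers(2)       # 2a: pick an index i uniformly at random
    # 2b: draw x_i from its conditional given the other variable
    x[i] = rng.normal(rho * x[1 - i], np.sqrt(1 - rho**2))
    samples.append(x.copy())  # 2c: record the new state x^(t)

samples = np.array(samples)
print("empirical correlation:", np.corrcoef(samples.T)[0, 1])  # close to rho
```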

Let us now show that Gibbs sampling is a special case of Metropolis-Hastings where the proposed moves are always accepted (the acceptance probability is 1).

Let $x_i$ denote the $i$-th variable, and let $x_{-i}$ denote the set of all variables except $x_i$. Let $x = (x_i, x_{-i})$ be the current state, and consider a proposed state $x' = (x_i', x_{-i})$ that differs from $x$ only in the $i$-th variable. The Gibbs update corresponds to the proposal distribution $q(x' \mid x) = p(x_i' \mid x_{-i})$, so the Metropolis-Hastings acceptance probability is

$$A = \min\left(1, \frac{p(x')\, q(x \mid x')}{p(x)\, q(x' \mid x)}\right) = \min\left(1, \frac{p(x_i' \mid x_{-i})\, p(x_{-i})\, p(x_i \mid x_{-i})}{p(x_i \mid x_{-i})\, p(x_{-i})\, p(x_i' \mid x_{-i})}\right) = \min(1, 1) = 1,$$

where we factored $p(x) = p(x_i \mid x_{-i})\, p(x_{-i})$ and $p(x') = p(x_i' \mid x_{-i})\, p(x_{-i})$. Hence every proposed move is accepted.
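As a quick sanity check, the following sketch verifies numerically, on an arbitrary small discrete joint chosen purely for illustration, that every single-variable Gibbs move has an MH acceptance ratio of exactly 1:

```python
import numpy as np

# Arbitrary joint distribution p(x1, x2) over {0, 1, 2}^2.
rng = np.random.default_rng(0)
p = rng.random((3, 3))
p /= p.sum()

def cond(x2):
    """Conditional p(x1 = . | x2) computed from the joint table."""
    return p[:, x2] / p[:, x2].sum()

# For every state (x1, x2) and every Gibbs proposal x1' ~ p(x1 | x2),
# the MH ratio p(x') q(x | x') / (p(x) q(x' | x)) equals 1.
for x1 in range(3):
    for x2 in range(3):
        for new in range(3):
            ratio = (p[new, x2] * cond(x2)[x1]) / (p[x1, x2] * cond(x2)[new])
            assert np.isclose(ratio, 1.0)
print("all Gibbs moves are accepted with probability 1")
```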

Gibbs sampling is used very often in practice since we don't have to design a proposal distribution. Note that the Gibbs sampling algorithm described earlier is known as random-scan Gibbs sampling because we choose an index $i$ uniformly at random in each iteration. A common implementation of Gibbs sampling is systematic-scan Gibbs sampling, where we have a for loop that goes through all of the variables in some fixed order and samples each $x_i$ from its conditional distribution given all of the other variables.
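The only change relative to the random-scan sketch above is how indices are chosen; a minimal systematic-scan version for the same (illustrative) bivariate-normal target:

```python
import numpy as np

# Systematic-scan Gibbs for a bivariate normal with correlation rho:
# sweep over the variables in a fixed order instead of picking i at random.
rho = 0.8
rng = np.random.default_rng(0)

x = np.zeros(2)
samples = []
for t in range(50_000):
    for i in (0, 1):  # one full sweep in a fixed order
        x[i] = rng.normal(rho * x[1 - i], np.sqrt(1 - rho**2))
    samples.append(x.copy())

print("empirical correlation:", np.corrcoef(np.array(samples).T)[0, 1])
```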

Variants of Gibbs sampling

One variant of Gibbs sampling is blocked Gibbs sampling, where we group two or more variables together in a block and update the block by sampling from the joint distribution of these variables conditioned on all of the other variables. Updating several variables at a time can be helpful, particularly when the variables in a block are strongly correlated. Consider the limiting case: if we could sample from a single block containing all of the variables, we could sample from $p$ directly.
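As an illustration, here is a minimal sketch of blocked Gibbs sampling for a zero-mean trivariate normal, where the block $(x_1, x_2)$ is updated jointly given $x_3$; the covariance matrix and the choice of blocks are arbitrary:

```python
import numpy as np

# Blocked Gibbs for a zero-mean Gaussian: sample (x1, x2) jointly given x3,
# then x3 given (x1, x2), using the exact Gaussian conditionals.
Sigma = np.array([[1.0, 0.8, 0.3],
                  [0.8, 1.0, 0.3],
                  [0.3, 0.3, 1.0]])
rng = np.random.default_rng(0)

def conditional(A, B, x_B):
    """Mean and covariance of x_A | x_B for the zero-mean Gaussian above."""
    K = Sigma[np.ix_(A, B)] @ np.linalg.inv(Sigma[np.ix_(B, B)])
    return K @ x_B, Sigma[np.ix_(A, A)] - K @ Sigma[np.ix_(B, A)]

x = np.zeros(3)
samples = []
for t in range(20_000):
    m, C = conditional([0, 1], [2], x[2:])   # block (x1, x2) | x3
    x[:2] = rng.multivariate_normal(m, C)
    m, C = conditional([2], [0, 1], x[:2])   # x3 | (x1, x2)
    x[2] = rng.normal(m[0], np.sqrt(C[0, 0]))
    samples.append(x.copy())

print("empirical covariance:\n", np.cov(np.array(samples).T))  # close to Sigma
```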

Another variant is collapsed Gibbs sampling. In this algorithm, we marginalize out as many variables as possible before sampling from the conditional distribution of some variable.

As an example, suppose the target probability distribution we want to sample from is $p(x, z)$.

In Gibbs sampling, we would alternately sample $x \sim p(x \mid z)$ and $z \sim p(z \mid x)$.

In collapsed Gibbs sampling, we would alternately sample $z \sim p(z)$ (with $x$ marginalized out) and then $x \sim p(x \mid z)$. Note that in this case, we are drawing samples from the exact distribution $p(x, z)$.

Note that the ordering of the variables in the sampling procedure is very important for collapsed Gibbs sampling (to ensure that the resulting Markov chain has the right stationary distribution), since the right ordering might depend on which variables we marginalize out. In the example above, we must draw $z$ from its marginal before drawing $x$ conditioned on it.
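Here is a minimal sketch of the example above on a small discrete joint $p(x, z)$, with the marginal and conditionals computed by enumeration from an arbitrary joint table; it confirms that collapsed Gibbs produces exact samples in this case:

```python
import numpy as np

# Arbitrary discrete joint p(x, z) over {0, ..., 3}^2.
rng = np.random.default_rng(0)
p = rng.random((4, 4))
p /= p.sum()

p_z = p.sum(axis=0)  # marginal p(z), with x collapsed out

counts = np.zeros((4, 4))
for t in range(100_000):
    z = rng.choice(4, p=p_z)               # z ~ p(z)
    x = rng.choice(4, p=p[:, z] / p_z[z])  # x ~ p(x | z)
    counts[x, z] += 1                      # each (x, z) is an exact draw

print("max error vs p(x, z):", np.abs(counts / counts.sum() - p).max())
```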

Simulated annealing

Simulated annealing is an adaptation of the Metropolis-Hastings algorithm and is a heuristic for finding the global maximum of a given function $f(x)$. Consider the case of SAT, where we are given a CNF formula and want to find a satisfying assignment. Simulated annealing moves around the space of assignments, trying to find ones that satisfy as many clauses as possible.

We construct a probability distribution that puts high probability on assignments that satisfy many clauses. Let $p_T(x) \propto \exp(f(x)/T)$, where $f(x)$ is the number of clauses satisfied by the assignment $x$ and $T > 0$ is the "temperature" parameter. As $T \to \infty$, $p_T$ approaches the uniform distribution. As $T \to 0$, $p_T$ tends to put all of the probability mass on satisfying assignments (the maximizers of $f$).

To solve the optimization problem, we want to sample from this distribution when $T$ is small. We can use Metropolis-Hastings with a proposal distribution that randomly picks a variable and flips it: let $q(x' \mid x)$ be equal to $1/n$ if $x$ and $x'$ differ in exactly one variable and $0$ otherwise. Since $q$ is symmetric, the Metropolis-Hastings acceptance probability is

$$A = \min\left(1, \frac{p_T(x')}{p_T(x)}\right) = \min\left(1, \exp\left(\frac{f(x') - f(x)}{T}\right)\right).$$

If $f(x') \geq f(x)$, then we always accept the transition. The temperature $T$ controls how greedy we are: when $T$ is small, moves that decrease $f$ are almost never accepted.

The algorithm starts with a large $T$ in order to move around the space freely, and slowly decreases $T$ to concentrate the probability mass on the states that satisfy the most clauses. $T$ is decreased according to some annealing schedule. Each iteration is otherwise an MH update.
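A minimal sketch of this procedure on a random 3-CNF instance; the instance size, initial temperature, and geometric cooling rate are all arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_clauses = 50, 200
# Each clause is 3 literals; a literal (v, s) is satisfied when x[v] == s.
clauses = [[(rng.integers(n_vars), rng.integers(2)) for _ in range(3)]
           for _ in range(n_clauses)]

def f(x):
    """Number of clauses satisfied by the assignment x (array of 0/1)."""
    return sum(any(x[v] == s for v, s in c) for c in clauses)

x = rng.integers(2, size=n_vars)
fx = f(x)
T = 2.0
for step in range(20_000):
    i = rng.integers(n_vars)   # proposal: flip one variable chosen at random
    x[i] ^= 1
    fx_new = f(x)
    # MH acceptance: uphill moves are always accepted, downhill moves
    # are accepted with probability exp((f(x') - f(x)) / T).
    if fx_new >= fx or rng.random() < np.exp((fx_new - fx) / T):
        fx = fx_new
    else:
        x[i] ^= 1              # reject the move: undo the flip
    T = max(0.01, T * 0.9995)  # annealing schedule: slowly lower T

print(f"satisfied {fx} / {n_clauses} clauses")
```

A real implementation would evaluate $f$ incrementally after each flip rather than rescanning every clause.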

Gibbs sampling - Volodymyr Kuleshov