The marginal likelihood, or model evidence, is the probability of the observed data under a given model, with the model's parameters integrated out. It is central to Bayesian model selection and comparison via the Bayes factor, which is simply the ratio of the marginal likelihoods of two competing models. One application is deciding which covariates to include in a linear model describing the data. Consider the usual normal-inverse-gamma prior specification on the regression coefficients and the noise variance.

## Normal Inverse-Gamma Priors
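The equations of the original post did not survive extraction, so what follows is a plausible reconstruction under standard notation: data $y \in \mathbb{R}^n$, an $n \times p$ design matrix $X$, and hyperparameters $\mu_0, \Lambda_0, a_0, b_0$ (all of these symbol names are assumptions, not taken from the original).

```latex
y \mid \beta, \sigma^2 \sim \mathcal{N}(X\beta,\ \sigma^2 I_n), \qquad
\beta \mid \sigma^2 \sim \mathcal{N}(\mu_0,\ \sigma^2 \Lambda_0^{-1}), \qquad
\sigma^2 \sim \text{Inv-Gamma}(a_0, b_0)
```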

Now construct the joint distribution, from which the regression coefficients and the variance will be integrated out.
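Assuming the standard normal-inverse-gamma specification with hyperparameters $\mu_0, \Lambda_0, a_0, b_0$, the joint is likelihood times prior:

```latex
p(y, \beta, \sigma^2)
= (2\pi\sigma^2)^{-n/2} \exp\!\Big(-\tfrac{1}{2\sigma^2}(y - X\beta)^\top (y - X\beta)\Big)
\times (2\pi\sigma^2)^{-p/2} |\Lambda_0|^{1/2} \exp\!\Big(-\tfrac{1}{2\sigma^2}(\beta - \mu_0)^\top \Lambda_0 (\beta - \mu_0)\Big)
\times \frac{b_0^{a_0}}{\Gamma(a_0)} (\sigma^2)^{-a_0 - 1} \exp\!\Big(-\frac{b_0}{\sigma^2}\Big)
```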

This can be made neater by rearrangement. Consider first the two Gaussian exponentials above: their exponents can be combined and rewritten so that a normal kernel in $\beta$ becomes recognizable.
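Explicitly, under the notation assumed above, the two Gaussian exponents combine as a quadratic form in $\beta$:

```latex
-\frac{1}{2\sigma^2}\Big[(y - X\beta)^\top (y - X\beta) + (\beta - \mu_0)^\top \Lambda_0 (\beta - \mu_0)\Big]
= -\frac{1}{2\sigma^2}\Big[\beta^\top (X^\top X + \Lambda_0)\beta - 2\beta^\top (X^\top y + \Lambda_0 \mu_0) + y^\top y + \mu_0^\top \Lambda_0 \mu_0\Big]
```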

Define:
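The definitions intended here are presumably the posterior precision and posterior mean:

```latex
\Lambda_n = X^\top X + \Lambda_0, \qquad
\mu_n = \Lambda_n^{-1}\big(X^\top y + \Lambda_0 \mu_0\big)
```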

then the above can be written as:
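Writing $\Lambda_n = X^\top X + \Lambda_0$ and $\mu_n = \Lambda_n^{-1}(X^\top y + \Lambda_0 \mu_0)$, completing the square in $\beta$ gives:

```latex
-\frac{1}{2\sigma^2}\Big[(\beta - \mu_n)^\top \Lambda_n (\beta - \mu_n) + y^\top y + \mu_0^\top \Lambda_0 \mu_0 - \mu_n^\top \Lambda_n \mu_n\Big]
```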

where the last line was achieved by completing the square.

Recognizing the kernel of a normal in $\beta$, whose normalizing constant is known, makes the integral over $\beta$ straightforward. Integrating out $\beta$ yields:
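The Gaussian integral contributes a factor $(2\pi\sigma^2)^{p/2} |\Lambda_n|^{-1/2}$, so (with $\Lambda_n$ and $\mu_n$ the posterior precision and mean, and $a_0, b_0$ the inverse-gamma hyperparameters) one is left with:

```latex
p(y, \sigma^2)
= (2\pi)^{-n/2} \left(\frac{|\Lambda_0|}{|\Lambda_n|}\right)^{1/2} \frac{b_0^{a_0}}{\Gamma(a_0)}\,
(\sigma^2)^{-(a_0 + n/2) - 1}
\exp\!\Big(-\frac{1}{\sigma^2}\Big[b_0 + \tfrac{1}{2}\big(y^\top y + \mu_0^\top \Lambda_0 \mu_0 - \mu_n^\top \Lambda_n \mu_n\big)\Big]\Big)
```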

Following the same strategy used for $\beta$, this can be rearranged into an expression containing an inverse-gamma kernel in $\sigma^2$.

Define:
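The intended definitions are presumably the updated inverse-gamma shape and rate:

```latex
a_n = a_0 + \frac{n}{2}, \qquad
b_n = b_0 + \frac{1}{2}\big(y^\top y + \mu_0^\top \Lambda_0 \mu_0 - \mu_n^\top \Lambda_n \mu_n\big)
```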

Then the above reads
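With the updated shape $a_n = a_0 + n/2$ and rate $b_n = b_0 + \tfrac{1}{2}(y^\top y + \mu_0^\top \Lambda_0 \mu_0 - \mu_n^\top \Lambda_n \mu_n)$, the joint reads:

```latex
p(y, \sigma^2)
= (2\pi)^{-n/2} \left(\frac{|\Lambda_0|}{|\Lambda_n|}\right)^{1/2} \frac{b_0^{a_0}}{\Gamma(a_0)}\,
(\sigma^2)^{-a_n - 1}\, e^{-b_n/\sigma^2}
```

in which $(\sigma^2)^{-a_n - 1} e^{-b_n/\sigma^2}$ is an unnormalized $\text{Inv-Gamma}(a_n, b_n)$ density.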

Recognizing the kernel of an inverse-gamma makes the integration over $\sigma^2$ easy and yields the final result for the marginal likelihood below.

## Marginal Likelihood/Model Evidence
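Using $\int_0^\infty (\sigma^2)^{-a_n - 1} e^{-b_n/\sigma^2}\, d\sigma^2 = \Gamma(a_n)/b_n^{a_n}$, the marginal likelihood in the notation assumed throughout this reconstruction is:

```latex
p(y) = (2\pi)^{-n/2} \sqrt{\frac{|\Lambda_0|}{|\Lambda_n|}}\;
\frac{b_0^{a_0}}{b_n^{a_n}}\;
\frac{\Gamma(a_n)}{\Gamma(a_0)}
```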

An example application of this result is variable selection in linear models, where the Bayes factor between two candidate designs decides which covariates to keep.
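To make that concrete, here is a sketch in NumPy/SciPy of the closed-form log marginal likelihood, used to compare two nested linear models via a log Bayes factor. The function name, data, and hyperparameter values are all illustrative, not taken from the original post.

```python
import numpy as np
from scipy.special import gammaln

def log_evidence(y, X, mu0, Lambda0, a0, b0):
    """Log marginal likelihood p(y) of a linear model under the
    normal-inverse-gamma prior beta | sigma^2 ~ N(mu0, sigma^2 Lambda0^-1),
    sigma^2 ~ Inv-Gamma(a0, b0)."""
    n = len(y)
    Lambda_n = X.T @ X + Lambda0                               # posterior precision
    mu_n = np.linalg.solve(Lambda_n, X.T @ y + Lambda0 @ mu0)  # posterior mean
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)
    _, logdet0 = np.linalg.slogdet(Lambda0)
    _, logdet_n = np.linalg.slogdet(Lambda_n)
    return (-0.5 * n * np.log(2.0 * np.pi)
            + 0.5 * (logdet0 - logdet_n)
            + a0 * np.log(b0) - a_n * np.log(b_n)
            + gammaln(a_n) - gammaln(a0))

# Toy comparison: intercept-only model vs intercept + slope.
rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

X1 = np.column_stack([np.ones(n), x])   # design with slope
X0 = np.ones((n, 1))                    # intercept only

mu1, L1 = np.zeros(2), np.eye(2)        # illustrative, fairly vague hyperparameters
mu0_, L0 = np.zeros(1), np.eye(1)

log_bf = (log_evidence(y, X1, mu1, L1, 2.0, 2.0)
          - log_evidence(y, X0, mu0_, L0, 2.0, 2.0))
# log Bayes factor; positive values favor including the slope
```

Because the data are generated with a strong slope, the log Bayes factor comes out large and positive, correctly favoring the richer model.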