Date of Award
Master of Science (MS)
Peter D. M. Macdonald
Mixture distributions are typically used to model data in which each observation belongs to one of several different groups. They also provide a convenient and flexible class of models for density estimation. When the number of components k is assumed known, the Gibbs sampler can be used for Bayesian estimation of the component parameters. We present an implementation of the Gibbs sampler for mixtures of Normal distributions and show that spurious modes can be avoided by introducing a Gamma prior in the Kiefer-Wolfowitz example.
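The Gibbs sampler for a Normal mixture with known k alternates between sampling latent allocations, mixture weights, and component means from their full conditionals. The following is a minimal sketch of that scheme, not the thesis's own implementation: it assumes unit component variances, a N(0, 10^2) prior on each mean, a symmetric Dirichlet(1) prior on the weights, and simulated two-component data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a two-component Normal mixture (hypothetical example,
# not the Kiefer-Wolfowitz data): weights (0.4, 0.6), means -2 and 3.
n = 200
z_true = rng.random(n) < 0.4
y = np.where(z_true, rng.normal(-2.0, 1.0, n), rng.normal(3.0, 1.0, n))

def gibbs_mixture(y, k=2, iters=500):
    """Gibbs sampler for a k-component Normal mixture with unit variances.

    Assumed priors for this sketch: mu_j ~ N(0, 10^2), weights ~ Dirichlet(1).
    """
    n = len(y)
    mu = rng.normal(0.0, 1.0, k)       # initial component means
    w = np.full(k, 1.0 / k)            # initial mixture weights
    mu_draws = np.empty((iters, k))
    for t in range(iters):
        # 1. Sample each allocation z_i from its discrete full conditional.
        logp = np.log(w) - 0.5 * (y[:, None] - mu[None, :]) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(k, p=row) for row in p])
        # 2. Sample the weights from their Dirichlet full conditional.
        counts = np.bincount(z, minlength=k)
        w = rng.dirichlet(1.0 + counts)
        # 3. Sample each mean from its conjugate Normal full conditional.
        for j in range(k):
            yj = y[z == j]
            prec = 1.0 / 100.0 + len(yj)   # prior precision + data precision
            mean = yj.sum() / prec
            mu[j] = rng.normal(mean, 1.0 / np.sqrt(prec))
        mu_draws[t] = mu
    return mu_draws

draws = gibbs_mixture(y, k=2, iters=500)
# Posterior means after burn-in, sorted to sidestep any label switching;
# these should sit roughly near the true values -2 and 3.
print(np.sort(draws[250:].mean(axis=0)))
```

With well-separated components as here, the chain mixes quickly; the harder cases (spurious modes, label switching) are exactly the issues the thesis addresses.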
Although adopting a Bayesian approach to mixture models has certain advantages, it is not without problems. One typical problem associated with mixtures is nonidentifiability of the component parameters. This causes label switching in the Gibbs sampler output and makes inference for the individual components meaningless. We show that the usual remedy, imposing simple identifiability constraints on the mixture parameters, is sometimes inadequate, and present an alternative approach that arranges the mixture components in order of non-decreasing means whilst choosing priors that are slightly more informative. We illustrate the success of our approach on the fishery example.
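The ordering step described above can be applied as a post-processing pass over the sampler output: within each draw, sort the components by their means and permute the other parameters accordingly. A minimal sketch, using synthetic draws with artificial label switches rather than real Gibbs output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Gibbs output for a k=3 mixture: per-iteration draws of the
# component means and weights, with labels randomly switched between draws.
true_mu = np.array([-1.0, 0.0, 2.0])
draws_mu = true_mu + rng.normal(0.0, 0.05, size=(100, 3))
draws_w = np.full((100, 3), 1.0 / 3.0)
perm = rng.permuted(np.tile(np.arange(3), (100, 1)), axis=1)
draws_mu = np.take_along_axis(draws_mu, perm, axis=1)
draws_w = np.take_along_axis(draws_w, perm, axis=1)

def relabel_by_means(mu, w):
    """Reorder components within each draw so the means are non-decreasing."""
    order = np.argsort(mu, axis=1)
    return (np.take_along_axis(mu, order, axis=1),
            np.take_along_axis(w, order, axis=1))

mu_fixed, w_fixed = relabel_by_means(draws_mu, draws_w)
# Component-wise posterior means are now interpretable, near (-1, 0, 2).
print(mu_fixed.mean(axis=0))
```

When components overlap heavily, ordering by means alone can still mislabel some draws, which is why the thesis pairs the ordering with slightly more informative priors.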
When the number of components k is considered unknown, more sophisticated methods are required to perform the Bayesian analysis. One method is the Reversible Jump MCMC algorithm described by Richardson and Green (1997), which they applied to univariate Normal mixtures. Alternatively, selection of k can be based on a comparison of models fitted with different numbers of components by some joint measures of model fit and model complexity. We review these methods and illustrate how to use them to compare competing mixture models using the acidity data.
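As one simple illustration of trading off model fit against model complexity when choosing k, the sketch below fits Normal mixtures with k = 1, 2, 3 by EM and compares them with BIC. This classical criterion stands in here for the joint fit/complexity measures the thesis reviews, and the bimodal data are simulated, not the acidity data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated bimodal data (a stand-in for the acidity example).
y = np.concatenate([rng.normal(4.3, 0.4, 100), rng.normal(6.3, 0.5, 55)])

def em_normal_mixture(y, k, iters=200):
    """Fit a k-component Normal mixture by EM; return the maximized log-likelihood."""
    n = len(y)
    mu = np.quantile(y, np.linspace(0.1, 0.9, k))   # spread initial means over the data
    sigma = np.full(k, y.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibilities of each component for each point.
        dens = (w * np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2.0 * np.pi)))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, and standard deviations.
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * y[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (y[:, None] - mu) ** 2).sum(axis=0) / nk)
        sigma = np.maximum(sigma, 1e-3)             # guard against collapse
    return np.log(dens.sum(axis=1)).sum()

def bic(y, k):
    # Free parameters: (k - 1) weights + k means + k standard deviations.
    return -2.0 * em_normal_mixture(y, k) + (3 * k - 1) * np.log(len(y))

for k in (1, 2, 3):
    print(k, round(bic(y, k), 1))   # smaller BIC is better
```

On these clearly bimodal data, the two-component fit should beat the single Normal; fully Bayesian alternatives such as Reversible Jump MCMC instead treat k as a parameter and sample it jointly with the rest.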
We conclude with some suggestions for further research.
Liu, Zhihui, "Bayesian Mixture Models" (2010). Open Access Dissertations and Theses. Paper 4499.
McMaster University Library