Publication Date: May 2013
Parametric mixture models are commonly used in applied work, especially in empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines inference on the proportions (mixing probabilities) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true values may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularity where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and proposes possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small-sample inference and projection methods.
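To make the procedure concrete, here is a minimal, grid-based sketch of a parametric bootstrap for the PLR statistic in a two-component normal mixture p·N(mu, 1) + (1 − p)·N(0, 1). The specific model, grid resolutions, and bootstrap size are illustrative assumptions, not taken from the paper; a serious implementation would use a proper optimizer rather than grid search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative grids (assumed, not from the paper); coarse for speed.
P_GRID = np.linspace(0.0, 1.0, 21)
MU_GRID = np.linspace(-3.0, 3.0, 31)


def loglik(x, p, mu):
    """Log-likelihood of the two-component mixture p*N(mu,1) + (1-p)*N(0,1)."""
    f = p * np.exp(-0.5 * (x - mu) ** 2) + (1 - p) * np.exp(-0.5 * x ** 2)
    return np.sum(np.log(f / np.sqrt(2.0 * np.pi)))


def profile_lr(x, p0):
    """PLR statistic for H0: p = p0, profiling out the nuisance parameter mu."""
    full = max(loglik(x, p, mu) for p in P_GRID for mu in MU_GRID)
    null = max(loglik(x, p0, mu) for mu in MU_GRID)
    return 2.0 * (full - null)


def bootstrap_pvalue(x, p0, B=99):
    """Parametric bootstrap: refit mu under the null, resample from the
    fitted null model, and compare the observed PLR to its bootstrap draws."""
    mu0 = max(MU_GRID, key=lambda mu: loglik(x, p0, mu))
    stat = profile_lr(x, p0)
    n = len(x)
    boot = []
    for _ in range(B):
        comp = rng.random(n) < p0  # component membership under the null
        xb = np.where(comp, rng.normal(mu0, 1.0, n), rng.normal(0.0, 1.0, n))
        boot.append(profile_lr(xb, p0))
    # Finite-sample adjusted bootstrap p-value.
    return (1 + sum(b >= stat for b in boot)) / (B + 1)
```

Note that resampling from the fitted null model (rather than using a pointwise chi-squared approximation) is what lets the procedure track the discontinuous limiting behavior near the boundary p0 = 0 or 1 and near the singular region where mu = 0.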
Keywords: Finite mixtures, Parametric bootstrap, Profiled likelihood ratio statistic, Partial identification, Parameter on the boundary
See CFP: 1437