Baggerly (1998) showed that empirical likelihood is the only member of the Cressie-Read power divergence family to be Bartlett correctable. This paper strengthens Baggerly's result by showing that, within a generalized class of the power divergence family, which includes the Cressie-Read family and other nonparametric likelihoods such as Schennach's (2005, 2007) exponentially tilted empirical likelihood, empirical likelihood remains the only member that is Bartlett correctable.
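For readers less familiar with empirical likelihood, the following sketch computes the EL log-ratio statistic and its implied weights for a univariate mean, using Owen's Lagrange-multiplier characterization with a Newton solver. The function name and implementation details are illustrative only and are not taken from Baggerly's or this paper's analysis.

```python
import numpy as np

def el_logratio(x, mu, tol=1e-10, max_iter=100):
    """Empirical log-likelihood ratio for a univariate mean mu.

    The EL weights are w_i = 1 / (n * (1 + lam * (x_i - mu))), where lam
    solves sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0.  The statistic
    -2 * sum_i log(n * w_i) is asymptotically chi-squared(1) under H0.
    """
    z = x - mu
    n = len(z)
    lam = 0.0
    for _ in range(max_iter):
        denom = 1.0 + lam * z
        grad = np.sum(z / denom)            # first-order condition in lam
        hess = -np.sum(z**2 / denom**2)     # its derivative (always negative)
        step = grad / hess
        lam -= step                         # Newton update
        if abs(step) < tol:
            break
    w = 1.0 / (n * (1.0 + lam * z))         # implied probabilities, sum to 1
    return -2.0 * np.sum(np.log(n * w)), w
```

At mu equal to the sample mean, lam = 0 solves the first-order condition, the weights are uniform (1/n), and the statistic is zero; moving mu away from the sample mean tilts the weights and the statistic grows.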
This paper studies the robustness of bootstrap inference methods for instrumental variable regression models. In particular, we compare the uniform weight and implied probability bootstrap approximations for parameter hypothesis test statistics by applying breakdown point theory, which focuses on the behavior of the bootstrap quantiles when outliers take arbitrarily large values. The implied probabilities are derived from an information-theoretic projection from the empirical distribution to a set of distributions satisfying the orthogonality conditions for the instruments. Our breakdown point analysis considers separately the effects of outliers in the dependent variables, endogenous regressors, and instruments, and clarifies the situations where the implied probability bootstrap can be more robust than the uniform weight bootstrap against outliers. Effects of the tail trimming introduced by Hill and Renault (2010) are also analyzed. Several simulation studies illustrate our theoretical findings.
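To make the implied probability bootstrap concrete, here is a minimal sketch of one information-theoretic projection of the kind the abstract refers to: exponential-tilting implied probabilities pi_i proportional to exp(lam' g_i), with lam chosen so the reweighted moment conditions hold exactly; resampling then uses pi in place of uniform weights. The function names and the choice of exponential tilting (rather than the paper's exact projection) are illustrative assumptions.

```python
import numpy as np

def et_implied_probabilities(g, tol=1e-10, max_iter=200):
    """Exponential-tilting implied probabilities for moment conditions.

    g is an (n, m) array of moment contributions (e.g. instrument times
    residual, z_i * u_i, in an IV model).  Returns pi with pi_i
    proportional to exp(lam' g_i), where lam solves sum_i pi_i g_i = 0.
    """
    n, m = g.shape
    lam = np.zeros(m)
    for _ in range(max_iter):
        w = np.exp(g @ lam)
        pi = w / w.sum()
        grad = pi @ g                                      # tilted moment, want 0
        hess = (g * pi[:, None]).T @ g - np.outer(grad, grad)  # its Jacobian in lam
        step = np.linalg.solve(hess, grad)
        lam -= step                                        # Newton update
        if np.linalg.norm(step) < tol:
            break
    w = np.exp(g @ lam)
    return w / w.sum()

def implied_probability_bootstrap_draw(rng, data, pi):
    """One bootstrap resample drawn with the implied probabilities
    instead of uniform weights 1/n."""
    idx = rng.choice(len(data), size=len(data), replace=True, p=pi)
    return data[idx]
```

Observations whose moment contributions are large in magnitude (e.g. outliers violating the instrument orthogonality) receive small implied probabilities and are therefore resampled less often, which is the intuition behind the robustness comparison in the breakdown point analysis.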
This paper studies robustness of bootstrap inference methods under moment conditions. In particular, we compare the uniform weight and implied probability bootstraps by analyzing the behavior of the bootstrap quantiles when outliers take arbitrarily large values, and derive the breakdown points for those bootstrap quantiles. The breakdown point properties characterize the situations where the implied probability bootstrap is more robust than the uniform weight bootstrap against outliers. Simulation studies illustrate our theoretical findings.