In the unconditional moment restriction model of Hansen (1982), specification tests and more efficient estimators are both available whenever the number of moment restrictions exceeds the number of parameters of interest. We show that a similar relationship between the potential refutability of a model and the existence of more efficient estimators is present in much broader settings. Specifically, a condition we name local overidentification is shown to be equivalent to both the existence of specification tests with nontrivial local power and the existence of more efficient estimators of some “smooth” parameters in general semi/nonparametric models. Under our notion of local overidentification, various locally nontrivial specification tests, such as Hausman tests and incremental Sargan tests (or optimally weighted quasi-likelihood ratio tests), naturally extend to general semi/nonparametric settings. We further obtain simple characterizations of local overidentification for general models of nonparametric conditional moment restrictions with possibly different conditioning sets. The results are applied to determining when semi/nonparametric models with endogeneity are locally testable, and when nonparametric plug-in and semiparametric two-step GMM estimators are semiparametrically efficient. Examples of empirically relevant semi/nonparametric structural models are presented.
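For concreteness, here is a minimal sketch of the parametric benchmark referenced above (the notation is ours, not the paper's). In Hansen's (1982) model a parameter $\theta_0 \in \Theta \subset \mathbb{R}^p$ satisfies $E[g(Z,\theta_0)] = 0$ for a known moment function $g$ taking values in $\mathbb{R}^q$. When $q > p$ the model is overidentified: with $\bar{g}_n(\theta) = n^{-1}\sum_{i=1}^n g(Z_i,\theta)$ and $\hat{W}$ a consistent estimator of the efficient weighting matrix, the GMM $J$-statistic
\[
J_n \;=\; n\,\bar{g}_n(\hat{\theta})'\,\hat{W}\,\bar{g}_n(\hat{\theta})
\]
is asymptotically $\chi^2_{q-p}$ under the null, so the model is testable, and the $q-p$ extra restrictions can be exploited to construct more efficient estimators. Local overidentification plays the role of the condition $q > p$ in general semi/nonparametric models.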
In this paper we make two contributions. First, we show by example that empirical likelihood and other commonly used tests for parametric moment restrictions, including the GMM-based J-test of Hansen (1982), are unable to control the rate at which the probability of a Type I error tends to zero. From this it follows that, for the optimality claim for empirical likelihood in Kitamura (2001) to hold, additional assumptions and qualifications need to be introduced. The example also reveals that empirical and parametric likelihood may differ non-negligibly with respect to the properties we consider, even in models in which they are first-order asymptotically equivalent. Second, under stronger assumptions than those in Kitamura (2001), we establish the following optimality result: (i) empirical likelihood controls the rate at which the probability of a Type I error tends to zero, and (ii) among all procedures for which the probability of a Type I error tends to zero at least as fast, empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for “most” alternatives. This result further implies that, among a class of tests satisfying a weaker criterion on their Type I error probabilities, empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for all alternatives.
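As a point of reference, a hedged sketch of the test statistic discussed above, again in our own notation rather than the paper's. For the moment restriction $E[g(Z,\theta)] = 0$, $\theta \in \Theta$ (with $q$ moments and $p$ parameters, as in the sketch above), the empirical likelihood ratio is
\[
\mathrm{EL}_n \;=\; \sup\Big\{\textstyle\prod_{i=1}^n n p_i \;:\; p_i \ge 0,\ \sum_{i=1}^n p_i = 1,\ \sum_{i=1}^n p_i\, g(Z_i,\theta) = 0 \text{ for some } \theta \in \Theta\Big\},
\]
and the test rejects when $-2\log \mathrm{EL}_n$ is large; under standard regularity conditions this statistic is asymptotically $\chi^2_{q-p}$ under the null, matching the $J$-test to first order. The optimality notions above compare such tests through the rates at which their Type I and Type II error probabilities decay with the sample size, in the large-deviations sense of Kitamura (2001).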