
Patrik Guggenberger Publications

Abstract

This paper introduces a new confidence interval (CI) for the autoregressive (AR) parameter in an AR(1) model that allows for conditional heteroskedasticity of general form and AR parameters that are less than or equal to unity. The CI is a modification of Mikusheva’s (2007a) modification of Stock’s (1991) CI that employs the least squares estimator and a heteroskedasticity-robust variance estimator. The CI is shown to have correct asymptotic size and to be asymptotically similar (in a uniform sense). It does not require any tuning parameters. No existing procedures have these properties. Monte Carlo simulations show that the CI performs well in finite samples in terms of coverage probability and average length, for innovations with and without conditional heteroskedasticity.
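
As a rough illustration of the ingredients the CI is built from (the least squares estimator and a heteroskedasticity-robust variance estimator), the sketch below computes the LS estimate of the AR parameter and a robust t statistic from an AR(1) regression with intercept. The GARCH(1,1) innovations and all parameter values are purely illustrative, and this is not the paper's CI construction, which inverts tests over a grid of AR values.

```python
import numpy as np

def ar1_ls_hc_tstat(y, rho0=1.0):
    """LS estimate of the AR(1) parameter and a heteroskedasticity-robust
    (Eicker-White) t statistic for H0: rho = rho0, from a regression of
    y_t on an intercept and y_{t-1}."""
    y_lag, y_cur = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(y_lag), y_lag])   # intercept, lagged y
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y_cur                         # (intercept, rho_hat)
    resid = y_cur - X @ beta
    # heteroskedasticity-robust ("sandwich") variance estimator
    meat = X.T @ (X * resid[:, None] ** 2)
    V = XtX_inv @ meat @ XtX_inv
    rho_hat, se_rho = beta[1], np.sqrt(V[1, 1])
    return rho_hat, (rho_hat - rho0) / se_rho

# illustrative data: AR(1) with conditionally heteroskedastic (GARCH(1,1)) errors
rng = np.random.default_rng(0)
n, rho = 500, 0.95
u, sigma2 = np.zeros(n), np.ones(n)
for t in range(1, n):
    sigma2[t] = 0.1 + 0.1 * u[t - 1] ** 2 + 0.8 * sigma2[t - 1]
    u[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + u[t]

print(ar1_ls_hc_tstat(y, rho0=rho))
```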

Abstract

This paper provides a set of results that can be used to establish the asymptotic size and/or similarity in a uniform sense of confidence sets and tests. The results are generic in that they can be applied to a broad range of problems. They are most useful in scenarios where the pointwise limit distribution of a test statistic is discontinuous in some parameter of the model.

The results are illustrated in three examples. These are: (i) the conditional likelihood ratio test of Moreira (2003) for linear instrumental variables models with instruments that may be weak, extended to the case of heteroskedastic errors; (ii) the grid bootstrap confidence interval of Hansen (1999) for the sum of the AR coefficients in a k-th order autoregressive model with unknown innovation distribution; and (iii) the standard quasi-likelihood ratio test in a nonlinear regression model where identification is lost when the coefficient on the nonlinear regressor is zero.

Discussion Paper
Abstract

This paper introduces a new identification- and singularity-robust conditional quasi-likelihood ratio (SR-CQLR) test and a new identification- and singularity-robust Anderson and Rubin (1949) (SR-AR) test for linear and nonlinear moment condition models. Both tests are very fast to compute. The paper shows that the tests have correct asymptotic size and are asymptotically similar (in a uniform sense) under very weak conditions. For example, in i.i.d. scenarios, all that is required is that the moment functions and their derivatives have 2+γ bounded moments for some γ>0. No conditions are placed on the expected Jacobian of the moment functions, on the eigenvalues of the variance matrix of the moment functions, or on the eigenvalues of the expected outer product of the (vectorized) orthogonalized sample Jacobian of the moment functions.
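
For orientation, the standard (not singularity-robust) Anderson-Rubin-type statistic for a moment condition model E g(W_i, θ) = 0 with k moment functions is

\[
\mathrm{AR}_n(\theta) \;=\; n\,\hat g_n(\theta)'\,\hat\Omega_n(\theta)^{-1}\,\hat g_n(\theta),
\qquad
\hat g_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^n g(W_i,\theta),
\]

where Ω̂_n(θ) is the sample variance matrix of the moment functions and the test rejects when AR_n(θ) exceeds the 1−α quantile of the χ²_k distribution. The SR versions in the paper modify this construction so that it remains valid when Ω̂_n(θ) is singular or nearly singular.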

The SR-CQLR test is shown to be asymptotically efficient in a GMM sense under strong and semi-strong identification (for all k≥p, where k and p are the numbers of moment conditions and parameters, respectively). The SR-CQLR test reduces asymptotically to Moreira’s CLR test when p=1 in the homoskedastic linear IV model. The same is true for p≥2 in most, but not all, identification scenarios.

We also introduce versions of the SR-CQLR and SR-AR tests for subvector hypotheses and show that they have correct asymptotic size under the assumption that the parameters not under test are strongly identified. The subvector SR-CQLR test is shown to be asymptotically efficient in a GMM sense under strong and semi-strong identification.

Abstract

This paper considers a first-order autoregressive model with conditionally heteroskedastic innovations. The asymptotic distributions of least squares (LS), infeasible generalized least squares (GLS), and feasible GLS estimators and t statistics are determined. The GLS procedures allow for misspecification of the form of the conditional heteroskedasticity and, hence, are referred to as quasi-GLS procedures. The asymptotic results are established for drifting sequences of the autoregressive parameter and the distribution of the time series of innovations. In particular, we consider the full range of cases in which the autoregressive parameter ρn satisfies (i) n(1 − ρn) → ∞ and (ii) n(1 − ρn) → h1 ∈ [0, ∞) as n → ∞, where n is the sample size. Results of this type are needed to establish the uniform asymptotic properties of the LS and quasi-GLS statistics.
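
The quasi-GLS idea can be sketched as follows. The weighting scheme below (an ARCH(1) proxy fitted to preliminary LS residuals) is only an illustrative, possibly misspecified choice and is not the specific quasi-GLS procedure analyzed in the paper.

```python
import numpy as np

def quasi_gls_ar1(y):
    """Illustrative feasible quasi-GLS estimate of the AR(1) parameter:
    (1) preliminary LS fit, (2) a simple, possibly misspecified ARCH(1)
    fit to the LS residuals by least squares on squared residuals,
    (3) weighted LS using the fitted conditional variances."""
    y_lag, y_cur = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(y_lag), y_lag])

    # (1) preliminary LS
    beta_ls, *_ = np.linalg.lstsq(X, y_cur, rcond=None)
    u = y_cur - X @ beta_ls

    # (2) crude ARCH(1) proxy: regress u_t^2 on a constant and u_{t-1}^2
    Z = np.column_stack([np.ones(len(u) - 1), u[:-1] ** 2])
    a, *_ = np.linalg.lstsq(Z, u[1:] ** 2, rcond=None)
    sigma2 = np.maximum(Z @ a, 1e-6)          # fitted conditional variances

    # (3) weighted LS on the observations that have a fitted variance
    w = 1.0 / np.sqrt(sigma2)
    Xw, yw = X[1:] * w[:, None], y_cur[1:] * w
    beta_gls, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta_ls[1], beta_gls[1]            # LS and quasi-GLS estimates of rho
```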

Abstract

The topic of this paper is inference in models in which parameters are defined by moment inequalities and/or equalities. The parameters may or may not be identified. This paper introduces a new class of confidence sets and tests based on generalized moment selection (GMS). GMS procedures are shown to have correct asymptotic size in a uniform sense and are shown not to be asymptotically conservative.
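
Concretely, the model in question specifies, in standard notation with p moment inequalities and v moment equalities,

\[
E\,m_j(W_i,\theta_0) \ge 0 \;\; \text{for } j = 1,\dots,p,
\qquad
E\,m_j(W_i,\theta_0) = 0 \;\; \text{for } j = p+1,\dots,p+v,
\]

with confidence sets obtained by inverting tests of these restrictions, e.g. tests based on a statistic of the form S(n^{1/2} m̄_n(θ), Σ̂_n(θ)), where m̄_n(θ) collects the sample moments and Σ̂_n(θ) estimates their variance matrix; the GMS step concerns how the critical value accounts for which inequalities are close to binding.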

The power of GMS tests is compared to that of subsampling, m out of n bootstrap, and “plug-in asymptotic” (PA) tests. The latter three procedures are the only general procedures in the literature that have been shown to have correct asymptotic size in a uniform sense for the moment inequality/equality model. GMS tests are shown to have asymptotic power that dominates that of subsampling, m out of n bootstrap, and PA tests. Subsampling and m out of n bootstrap tests are shown to have asymptotic power that dominates that of PA tests.

Journal of Econometrics
Abstract

This paper analyzes the properties of subsampling, hybrid subsampling, and size-correction methods in two non-regular models. The latter two procedures are introduced in Andrews and Guggenberger (2005b). The models are non-regular in the sense that the test statistics of interest exhibit a discontinuity in their limit distribution as a function of a parameter in the model. The first model is a linear instrumental variables (IV) model with possibly weak IVs estimated using two-stage least squares (2SLS). In this case, the discontinuity occurs when the concentration parameter is zero. The second model is a linear regression model in which the parameter of interest may be near a boundary. In this case, the discontinuity occurs when the parameter is on the boundary.

The paper shows that in the IV model one-sided and equal-tailed two-sided subsampling tests and confidence intervals (CIs) based on the 2SLS t statistic do not have correct asymptotic size. This holds for both fully- and partially-studentized t statistics. However, subsampling procedures based on the partially-studentized t statistic can be size-corrected. On the other hand, symmetric two-sided subsampling tests and CIs are shown to have (essentially) correct asymptotic size when based on a partially-studentized t statistic.
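
For context, a subsampling critical value is simply an empirical quantile of the test statistic recomputed on many size-b subsamples (consecutive blocks in the time-series case). A minimal sketch is given below; stat_fn is a user-supplied (and therefore hypothetical) function, and the centering of the subsample statistics, which matters for the size results discussed here, is left inside stat_fn.

```python
import numpy as np

def subsampling_critical_value(data, stat_fn, b, alpha=0.05,
                               max_subsamples=1000, seed=0):
    """Empirical 1-alpha quantile of stat_fn computed on consecutive blocks
    of length b (the usual choice for time series); for i.i.d. data, random
    subsamples of size b could be drawn instead."""
    n = len(data)
    starts = np.arange(n - b + 1)
    if len(starts) > max_subsamples:                     # thin for speed
        starts = np.random.default_rng(seed).choice(starts, max_subsamples,
                                                    replace=False)
    stats = np.array([stat_fn(data[s:s + b]) for s in starts])
    return np.quantile(stats, 1 - alpha)
```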

Furthermore, all types of hybrid subsampling tests and CIs are shown to have correct asymptotic size in this model. The above results are consistent with “impossibility” results of Dufour (1997) because subsampling and hybrid subsampling CIs are shown to have infinite length with positive probability.

Subsampling CIs for a parameter that may be near a lower boundary are shown to have incorrect asymptotic size in the upper one-sided, equal-tailed two-sided, and symmetric two-sided cases. Again, size-correction is possible. In this model as well, all types of hybrid subsampling CIs are found to have correct asymptotic size.

Keywords: Asymptotic size, Finite-sample size, Hybrid test, Instrumental variable, Over-rejection, Parameter near boundary, Size correction, Subsampling confidence interval, Subsampling test, Weak instrument

JEL Classification Numbers: C12, C15

Abstract

This paper considers inference based on a test statistic that has a limit distribution that is discontinuous in a nuisance parameter or the parameter of interest. The paper shows that subsample, b < n bootstrap, and standard fixed critical value tests based on such a test statistic often have asymptotic size (defined as the limit of the finite-sample size) that is greater than the nominal level of the tests. We determine precisely the asymptotic size of such tests under a general set of high-level conditions that are relatively easy to verify. The high-level conditions are verified in several examples. Analogous results are established for confidence intervals.
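
Here the asymptotic size of a nominal level α test based on a statistic T_n with critical value c_n(1−α) is

\[
\mathrm{AsySz} \;=\; \limsup_{n\to\infty}\; \sup_{\gamma\in\Gamma}\; P_{\gamma}\bigl(T_n > c_n(1-\alpha)\bigr),
\]

where γ indexes the nuisance parameters (and distributions) in the parameter space Γ; the paper's conditions yield an explicit formula for this quantity.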

The results apply to tests and confidence intervals (i) when a parameter may be near a boundary, (ii) for parameters defined by moment inequalities, (iii) based on super-efficient or shrinkage estimators, (iv) based on post-model selection estimators, (v) in scalar and vector autoregressive models with roots that may be close to unity, (vi) in models with lack of identification at some point(s) in the parameter space, such as models with weak instruments and threshold autoregressive models, (vii) in predictive regression models with nearly-integrated regressors, (viii) for non-differentiable functions of parameters, and (ix) for differentiable functions of parameters that have zero first-order derivative.

Examples (i)-(iii) are treated in this paper. Examples (i) and (iv)-(vi) are treated in sequels to this paper, Andrews and Guggenberger (2005a, b). In models with unidentified parameters that are bounded by moment inequalities, i.e., example (ii), certain subsample confidence regions are shown to have asymptotic size equal to their nominal level. In all other examples listed above, some types of subsample procedures do not have asymptotic size equal to their nominal level.

Keywords: Asymptotic size, b < n bootstrap, Finite-sample size, Over-rejection, Size correction, Subsample confidence interval, Subsample test

JEL Classification: C12, C15

Abstract

This paper considers the problem of constructing tests and confidence intervals (CIs) that have correct asymptotic size in a broad class of non-regular models. The models considered are non-regular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2005a) that standard fixed critical value, subsample, and b < n bootstrap methods often have incorrect size in such models. This paper introduces general methods of constructing tests and CIs that have correct size. First, procedures are introduced that are a hybrid of subsample and fixed critical value methods. The resulting hybrid procedures are easy to compute and have correct size asymptotically in many, but not all, cases of interest. Second, the paper introduces size-correction and “plug-in” size-correction methods for fixed critical value, subsample, and hybrid tests. The paper also introduces finite-sample adjustments to the asymptotic results of Andrews and Guggenberger (2005a) for subsample and hybrid methods and employs these adjustments in size-correction.
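
The hybrid construction can be sketched as follows (a paraphrase of the idea rather than a verbatim statement of the paper's definition): the test uses the larger of the subsample and fixed critical values,

\[
c^{\mathrm{hyb}}_{n,b}(1-\alpha) \;=\; \max\bigl\{\, c_{n,b}(1-\alpha),\; c_\infty(1-\alpha) \,\bigr\},
\]

where c_{n,b}(1−α) is the subsample quantile of the statistic and c_∞(1−α) is the fixed critical value from the standard asymptotic distribution.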

The paper discusses several examples in detail. The examples are: (i) tests when a nuisance parameter may be near a boundary, (ii) CIs in an autoregressive model with a root that may be close to unity, and (iii) tests and CIs based on a post-conservative model selection estimator.

Keywords: Asymptotic size, Autoregressive model, b < n bootstrap, Finite-sample size, Hybrid test, Model selection, Over-rejection, Parameter near boundary, Size correction, Subsample confidence interval, Subsample test

JEL Classification: C12, C15

Journal of Time Series Analysis
Abstract

This paper considers a mean zero stationary first-order autoregressive (AR) model. It is shown that the least squares estimator and t statistic have Cauchy and standard normal asymptotic distributions, respectively, when the AR parameter ρn is very near to one in the sense that 1 − ρn = o(n⁻¹).
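
A small Monte Carlo sketch of this setting, with purely illustrative choices such as ρn = 1 − n^(−3/2) (so that 1 − ρn = o(n⁻¹)) and a stationary initial condition:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 1000
rho = 1 - n ** (-1.5)            # 1 - rho = o(1/n): "very nearly" a unit root

t_stats = np.empty(reps)
for r in range(reps):
    u = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = rng.standard_normal() / np.sqrt(1 - rho ** 2)   # stationary start
    for t in range(1, n):
        y[t] = rho * y[t - 1] + u[t]
    y_lag, y_cur = y[:-1], y[1:]
    rho_hat = (y_lag @ y_cur) / (y_lag @ y_lag)             # LS, no intercept
    s2 = np.mean((y_cur - rho_hat * y_lag) ** 2)
    t_stats[r] = (rho_hat - rho) / np.sqrt(s2 / (y_lag @ y_lag))

# under the stated result the t statistic is approximately N(0,1),
# so the rejection frequency below should be near the nominal 5%
print(np.mean(np.abs(t_stats) > 1.96))
```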

Keywords: Asymptotics, Least squares, Nearly nonstationary, Stationary initial condition, Unit root

JEL Classification Number: C22

Econometrica
Abstract

The widely used log-periodogram regression estimator of the long-memory parameter d proposed by Geweke and Porter-Hudak (1983) (GPH) has been criticized because of its finite-sample bias, see Agiakloglou, Newbold, and Wohar (1993). In this paper, we propose a simple bias-reduced log-periodogram regression estimator, d̂_r, that eliminates the first- and higher-order biases of the GPH estimator. The bias-reduced estimator is the same as the GPH estimator except that one includes frequencies to the power 2k for k = 1,…,r, for some positive integer r, as additional regressors in the pseudo-regression model that yields the GPH estimator. The reduction in bias is obtained using assumptions on the spectrum only in a neighborhood of the zero frequency, which is consistent with the semiparametric nature of the long-memory model under consideration.
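
A minimal sketch of the estimator just described; the periodogram normalization and the exact form of the GPH regressor (−2 log λ_j here versus log(4 sin²(λ_j/2)) in some treatments) vary across implementations, and this is not the authors' code.

```python
import numpy as np

def gph_bias_reduced(x, m, r=0):
    """Log-periodogram regression estimate of the long-memory parameter d.
    r = 0 gives the usual GPH estimator; r >= 1 adds the regressors
    lambda_j^(2k), k = 1,...,r, as in the bias-reduced version above."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n            # first m Fourier frequencies
    # periodogram at the first m nonzero frequencies
    I = np.abs(np.fft.fft(x - np.mean(x))[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = [np.ones(m), -2 * np.log(lam)]                   # constant, GPH regressor
    X += [lam ** (2 * k) for k in range(1, r + 1)]       # bias-reducing regressors
    X = np.column_stack(X)
    coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return coef[1]                                       # coefficient on -2*log(lambda_j)

# usage with hypothetical data x and a common bandwidth choice:
# d_gph = gph_bias_reduced(x, m=int(len(x) ** 0.65), r=0)
# d_r2  = gph_bias_reduced(x, m=int(len(x) ** 0.65), r=2)
```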

Following the work of Robinson (1995b) and Hurvich, Deo, and Brodsky (1998), we establish the asymptotic bias, variance, and mean-squared error (MSE) of d̂_r, determine the MSE optimal choice of the number of frequencies, m, to include in the regression, and establish the asymptotic normality of d̂_r. These results show that the bias of d̂_r goes to zero at a faster rate than that of the GPH estimator when the normalized spectrum at zero is sufficiently smooth, but that its variance is increased only by a multiplicative constant. In consequence, the optimal rate of convergence to zero of the MSE of d̂_r is faster than that of the GPH estimator.

We establish the optimal rate of convergence of a minimax risk criterion for estimators of d when the normalized spectral density is in a class that includes those that are smooth of order s > 1 at zero. We show that the bias-reduced estimator d̂_r attains this rate when r > (s-2)/2 and m is chosen appropriately. For s > 2, the GPH estimator does not attain this rate. The proof of these results uses results of Giraitis, Robinson, and Samarov (1997).

Some Monte Carlo simulation results for stationary Gaussian ARFIMA(1,d,1) models show that the bias-reduced estimators perform well relative to the standard log-periodogram estimator.

Keywords: Asymptotic bias, asymptotic normality, bias reduction, frequency domain, long-range dependence, optimal rate, rate of convergence, strongly dependent time series

JEL Classification: C13, C14, C22