Yixiao Sun Publications

Abstract

Using the power kernels of Phillips, Sun and Jin (2006, 2007), we examine the large sample asymptotic properties of the t-test for different choices of power parameter (τ). We show that the nonstandard fixed-τ limit distributions of the t-statistic provide more accurate approximations to the finite sample distributions than the conventional large-τ limit distribution. We prove that the second-order corrected critical value based on an asymptotic expansion of the nonstandard limit distribution is also second-order correct under the large-τ asymptotics. As a further contribution, we propose a new practical procedure for selecting the test-optimal power parameter that addresses the central concern of hypothesis testing: the selected power parameter is test-optimal in the sense that it minimizes the type II error while controlling the type I error. A plug-in procedure for implementing the test-optimal power parameter is suggested. Simulations indicate that the new test is as accurate in size as the nonstandard test of Kiefer and Vogelsang (2002a, 2002b; KV), and yet it does not incur the power loss that often hurts the performance of the latter test. The new test therefore combines the advantages of the KV test and the standard (MSE optimal) HAC test while avoiding their main disadvantages (power loss and size distortion, respectively). The results complement recent work by Sun, Phillips and Jin (2008) on conventional and bT HAC testing.
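
As a rough illustration of how a power (exponentiated) kernel enters the studentization, the sketch below computes a long-run variance estimate with exponentiated Bartlett weights (1 − j/T)^τ applied to all sample autocovariances and uses it in a t-statistic for a simple location model; the kernel choice, function names, and AR(1) example are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def power_kernel_lrv(u, tau):
    """Long-run variance estimate using exponentiated Bartlett ("power")
    weights (1 - j/T)**tau applied to all sample autocovariances.
    Larger tau downweights distant lags more heavily."""
    T = len(u)
    u = u - u.mean()
    lrv = np.dot(u, u) / T                     # lag-0 autocovariance
    for j in range(1, T):
        gamma_j = np.dot(u[j:], u[:-j]) / T    # lag-j autocovariance
        lrv += 2.0 * (1.0 - j / T) ** tau * gamma_j
    return lrv

def t_stat_location(y, mu0, tau):
    """t-statistic for H0: E[y] = mu0, studentized with the power-kernel LRV."""
    T = len(y)
    return np.sqrt(T) * (y.mean() - mu0) / np.sqrt(power_kernel_lrv(y, tau))

# Illustrative use on AR(1) data, testing the true mean of zero.
rng = np.random.default_rng(0)
T, rho = 200, 0.5
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.standard_normal()
print(t_stat_location(y, 0.0, tau=16))
```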

Abstract

In time series regression with nonparametrically autocorrelated errors, it is now standard empirical practice to construct confidence intervals for regression coefficients on the basis of nonparametrically studentized t-statistics. The standard error used in the studentization is typically estimated by a kernel method that involves some smoothing over the sample autocovariances. The underlying parameter (M) that controls this smoothing is a bandwidth or truncation lag, and it plays a key role in the finite sample properties of tests and the actual coverage properties of the associated confidence intervals. The present paper develops a bandwidth choice rule for M that optimizes the coverage accuracy of interval estimators in the context of linear GMM regression. The optimal bandwidth balances the asymptotic variance with the asymptotic bias of the robust standard error estimator. This approach contrasts with the conventional bandwidth choice rule for nonparametric estimation, where the focus is the nonparametric quantity itself and the choice rule balances asymptotic variance with squared asymptotic bias. It turns out that the optimal bandwidth for interval estimation has a different expansion rate and is typically substantially larger than the optimal bandwidth for point estimation of the standard errors. The new approach to bandwidth choice calls for a refined asymptotic measurement of the coverage probability, which is provided by means of an Edgeworth expansion of the finite sample distribution of the nonparametrically studentized t-statistic. This asymptotic expansion extends earlier work and is of independent interest. A simple plug-in procedure for implementing this optimal bandwidth is suggested, and simulations confirm that the new plug-in procedure works well in finite samples. Issues of interval length and false coverage probability are also considered, leading to a secondary approach to bandwidth selection with similar properties.
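
To make the role of M concrete, the sketch below studentizes a sample mean with a Bartlett (Newey-West) kernel estimate of the long-run variance at truncation lag M and forms the associated confidence interval, whose coverage accuracy is what the paper's bandwidth rule targets. The weighting scheme and function names are illustrative assumptions; the paper's setting is linear GMM regression rather than this scalar mean.

```python
import numpy as np

def bartlett_lrv(u, M):
    """Kernel long-run variance estimate with Bartlett (Newey-West) weights
    and truncation lag M; larger M reduces bias but raises variance."""
    T = len(u)
    u = u - u.mean()
    lrv = np.dot(u, u) / T
    for j in range(1, min(M, T - 1) + 1):
        w = 1.0 - j / (M + 1.0)                      # Bartlett weight
        lrv += 2.0 * w * np.dot(u[j:], u[:-j]) / T
    return lrv

def mean_confidence_interval(y, M, z=1.96):
    """Nominal 95% interval for the mean using the kernel robust standard error."""
    T = len(y)
    se = np.sqrt(bartlett_lrv(y, M) / T)
    return y.mean() - z * se, y.mean() + z * se
```

In this notation, the paper's point is that the M minimizing coverage error balances the variance of the kernel estimate against its bias (rather than its squared bias), and therefore grows with the sample size at a faster rate than the MSE-optimal M.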

Abstract

In time series regressions with nonparametrically autocorrelated errors, it is now standard empirical practice to use kernel-based robust standard errors that involve some smoothing function over the sample autocorrelations. The underlying smoothing parameter b, which can be defined as the ratio of the bandwidth (or truncation lag) to the sample size, is a tuning parameter that plays a key role in determining the asymptotic properties of the standard errors and associated semiparametric tests. Small-b asymptotics involve standard limit theory such as standard normal or chi-squared limits, whereas fixed-b asymptotics typically lead to nonstandard limit distributions involving Brownian bridge functionals. The present paper shows that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. In particular, using asymptotic expansions of both the finite sample distribution and the nonstandard limit distribution, we confirm that the second-order corrected critical value based on the expansion of the nonstandard limiting distribution is also second-order correct under the standard small-b asymptotics. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations confirm that the new plug-in procedure works well in finite samples.
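
One way to see the fixed-b approximation at work is to simulate it: holding b = M/T fixed, the null distribution of the kernel-studentized t-statistic for i.i.d. Gaussian data mimics the Brownian bridge functional limit, so its empirical quantiles serve as approximate fixed-b critical values. The sketch below does this for the Bartlett kernel and a test on the mean; the sample size, replication count, and function names are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def bartlett_lrv(u, M):
    """Bartlett-kernel long-run variance estimate with truncation lag M."""
    T = len(u)
    u = u - u.mean()
    lrv = np.dot(u, u) / T
    for j in range(1, int(M) + 1):
        lrv += 2.0 * (1.0 - j / (M + 1.0)) * np.dot(u[j:], u[:-j]) / T
    return lrv

def fixed_b_critical_value(b, T=300, reps=2000, level=0.95, seed=0):
    """Approximate two-sided fixed-b critical value for the studentized mean
    test by Monte Carlo on i.i.d. standard normal samples."""
    rng = np.random.default_rng(seed)
    M = max(1, int(b * T))
    stats = np.empty(reps)
    for r in range(reps):
        y = rng.standard_normal(T)
        se = np.sqrt(bartlett_lrv(y, M) / T)
        stats[r] = y.mean() / se
    return np.quantile(np.abs(stats), level)

# Fixed-b critical values exceed the standard normal 1.96, more so as b grows.
print(fixed_b_critical_value(0.1), fixed_b_critical_value(0.5))
```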

Keywords: Asymptotic expansion, Bandwidth choice, Kernel method, Long-run variance, Loss function, Nonstandard asymptotics, Robust standard error, Type I and Type II errors

JEL Classification: C13; C14; C22; C51

Journal of Econometrics
Abstract

This paper studies fractional processes that may be perturbed by weakly dependent time series. The model for a perturbed fractional process is cast in a components framework in which there may be components of both long and short memory. All commonly used estimators of the long memory parameter (such as log periodogram (LP) regression) may be used in a components model where the data are affected by weakly dependent perturbations, but these estimators can suffer from serious downward bias. To circumvent this problem, the present paper proposes a new procedure that allows for the possible presence of additive perturbations in the data. The new estimator resembles the LP regression estimator but involves an additional (nonlinear) term in the regression that takes account of possible perturbation effects in the data. Under some smoothness assumptions at the origin, the bias of the new estimator is shown to disappear at a faster rate than that of the LP estimator, while its asymptotic variance is inflated only by a multiplicative constant. In consequence, the optimal rate of convergence to zero of the asymptotic MSE of the new estimator is faster than that of the LP estimator. Some simulation results demonstrate the viability and the bias-reducing feature of the new estimator relative to the LP estimator in finite samples. A test for the presence of perturbations in the data is given.
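
For context, the baseline log periodogram (GPH) regression that the new estimator modifies can be sketched as follows: regress the log periodogram at the first m Fourier frequencies on a constant and −2 log λ_j, and read off the slope as the estimate of the memory parameter. The paper's additional nonlinear perturbation term is not reproduced here, and the function name and normalizations are illustrative assumptions.

```python
import numpy as np

def lp_estimate(x, m):
    """Baseline log-periodogram (GPH) estimate of the memory parameter d,
    using the periodogram at the first m Fourier frequencies."""
    T = len(x)
    t = np.arange(T)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / T             # Fourier frequencies
    dft = np.array([np.sum(x * np.exp(-1j * lj * t)) for lj in lam])
    I = np.abs(dft) ** 2 / (2.0 * np.pi * T)                # periodogram ordinates
    # Regress log I(lam_j) on a constant and -2*log(lam_j); the slope on the
    # second regressor estimates d.
    X = np.column_stack([np.ones(m), -2.0 * np.log(lam)])
    coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return coef[1]
```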

Keywords: Asymptotic bias; Asymptotic normality; Bias reduction; Fractional components model; Perturbed fractional process; Rate of convergence; Testing perturbations

JEL Classification: C13; C14; C22; C51