
Sainan Jin Publications

Abstract

We analyze trend elimination methods and business cycle estimation by data filtering of the type introduced by Whittaker (1923) and popularized in economics in a particular form by Hodrick and Prescott (1980/1997; HP). A limit theory is developed for the HP filter for various classes of stochastic trend, trend break, and trend stationary data. Properties of the filtered series are shown to depend closely on the choice of the smoothing parameter (λ). For instance, when λ = O(n^4), where n is the sample size, and the HP filter is applied to an I(1) process, the filter does not remove the stochastic trend in the limit as n → ∞. Instead, the filter produces a smoothed Gaussian limit process that is differentiable to the fourth order. The residual ‘cyclical’ process has the random wandering non-differentiable characteristics of Brownian motion, thereby explaining the frequently observed ‘spurious cycle’ effect of the HP filter. On the other hand, when λ = o(n), the filter reproduces the limit Brownian motion and eliminates the stochastic trend, giving a zero ‘cyclical’ process. Simulations reveal that the λ = O(n^4) limit theory provides a good approximation to the actual HP filter for sample sizes common in practical work. When it is used as a trend removal device, the HP filter therefore typically fails to eliminate stochastic trends, contrary to what is now standard belief in applied macroeconomics. The findings are related to recent public debates about the long run effects of the global financial crisis.
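As a rough companion to the abstract above, the sketch below implements the HP filter itself (not the paper's limit theory) as the penalized least squares smoother f = (I + λD'D)^{-1} y, with D the second-difference operator. The function name, the λ = 1600 setting, and the random-walk example are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: return (trend, cycle).

    The trend solves min_f sum_t (y_t - f_t)^2 + lam * sum_t (f_{t+1} - 2 f_t + f_{t-1})^2,
    i.e. f = (I + lam * D'D)^{-1} y with D the second-difference matrix.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Second-difference operator D of size (n - 2) x n
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    A = (sparse.eye(n) + lam * (D.T @ D)).tocsc()
    trend = spsolve(A, y)
    cycle = y - trend
    return trend, cycle

# Illustration: a pure random walk (an I(1) process) of length n = 200.
# The abstract reports that the lambda = O(n^4) limit theory approximates the
# actual filter well at such sample sizes, under which the extracted 'cycle'
# of a random walk retains Brownian-motion-like wandering.
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))
trend, cycle = hp_filter(y, lam=1600.0)
print(cycle.std())
```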

Abstract

We propose new tests of the martingale hypothesis based on generalized versions of the Kolmogorov-Smirnov and Cramér-von Mises tests. The tests are distribution free and allow for a weak drift in the null model. The methods do not require either smoothing parameters or bootstrap resampling for their implementation and so are well suited to practical work. The paper develops limit theory for the tests under the null and shows that the tests are consistent against a wide class of nonlinear, non-martingale processes. Simulations show that the tests have good finite sample properties in comparison with other tests particularly under conditional heteroskedasticity and mildly explosive alternatives. An empirical application to major exchange rate data finds strong evidence in favor of the martingale hypothesis, confirming much earlier research.
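For orientation only, the sketch below computes plain Kolmogorov-Smirnov and Cramér-von Mises type functionals of the standardized partial-sum process of increments, the classical building block behind tests of this kind. These simplified statistics are not the generalized, distribution-free tests proposed in the paper; the mildly explosive AR(1) alternative is included only because the abstract mentions such alternatives.

```python
import numpy as np

def ks_cvm_increment_stats(y):
    """KS- and CvM-type functionals of the standardized partial-sum process of
    the increments of y. Under a driftless martingale null with stationary
    increments these behave like sup|W(r)| and an integral of W(r)^2
    (a simplified illustration, not the paper's statistics)."""
    dy = np.diff(np.asarray(y, dtype=float))
    n = len(dy)
    sigma = dy.std(ddof=1)
    partial = np.cumsum(dy) / (sigma * np.sqrt(n))
    ks = np.abs(partial).max()
    cvm = np.mean(partial ** 2)
    return ks, cvm

# A random walk (martingale) versus a mildly explosive AR(1) alternative.
rng = np.random.default_rng(1)
e = rng.standard_normal(500)
walk = np.cumsum(e)
explosive = np.empty(500)
explosive[0] = e[0]
for t in range(1, 500):
    explosive[t] = 1.02 * explosive[t - 1] + e[t]
print(ks_cvm_increment_stats(walk))       # moderate values under the null
print(ks_cvm_increment_stats(explosive))  # large values under the alternative
```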

Abstract

Using the power kernels of Phillips, Sun and Jin (2006, 2007), we examine the large sample asymptotic properties of the t-test for different choices of power parameter (τ). We show that the nonstandard fixed-τ limit distributions of the t-statistic provide more accurate approximations to the finite sample distributions than the conventional large-τ limit distribution. We prove that the second-order corrected critical value based on an asymptotic expansion of the nonstandard limit distribution is also second-order correct under the large-τ asymptotics. As a further contribution, we propose a new practical procedure for selecting the test-optimal power parameter that addresses the central concern of hypothesis testing: the selected power parameter is test-optimal in the sense that it minimizes the type II error while controlling for the type I error. A plug-in procedure for implementing the test-optimal power parameter is suggested. Simulations indicate that the new test is as accurate in size as the nonstandard test of Kiefer and Vogelsang (2002a, 2002b; KV), and yet it does not incur the power loss that often hurts the performance of the latter test. The new test therefore combines the advantages of the KV test and the standard (MSE optimal) HAC test while avoiding their main disadvantages (power loss and size distortion, respectively). The results complement recent work by Sun, Phillips and Jin (2008) on conventional and bT HAC testing.
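The size comparison at the centre of the abstract can be mimicked with a small Monte Carlo: studentize a sample mean with an untruncated power-kernel long-run variance estimate and compare the simulated fixed-τ critical value with the standard normal value used under large-τ asymptotics. The specific kernel (a Bartlett kernel raised to a power), the choice τ = 16, and all names below are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def power_kernel_lrv(u, tau=16):
    """Untruncated long-run variance estimate using the power kernel
    k(x) = (1 - |x|)**tau on [-1, 1] (an illustrative choice)."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = len(u)
    lrv = u @ u / n                                   # lag-0 autocovariance
    for j in range(1, n):
        lrv += 2.0 * (1.0 - j / n) ** tau * (u[j:] @ u[:-j]) / n
    return lrv

def t_stat(y, mu0=0.0, tau=16):
    """t-statistic for H0: E[y_t] = mu0, studentized by the power-kernel LRV."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    return np.sqrt(n) * (y.mean() - mu0) / np.sqrt(power_kernel_lrv(y, tau))

# Simulated 5% two-sided critical value of the fixed-tau null distribution
# versus the standard normal value 1.96; the simulated value is typically
# larger, reflecting the heavier-tailed nonstandard limit.
rng = np.random.default_rng(2)
stats = [t_stat(rng.standard_normal(200), tau=16) for _ in range(2000)]
print(np.quantile(np.abs(stats), 0.95), 1.96)
```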

Abstract

In time series regressions with nonparametrically autocorrelated errors, it is now standard empirical practice to use kernel-based robust standard errors that involve some smoothing function over the sample autocorrelations. The underlying smoothing parameter b, which can be defined as the ratio of the bandwidth (or truncation lag) to the sample size, is a tuning parameter that plays a key role in determining the asymptotic properties of the standard errors and associated semiparametric tests. Small-b asymptotics involve standard limit theory such as standard normal or chi-squared limits, whereas fixed-b asymptotics typically lead to nonstandard limit distributions involving Brownian bridge functionals. The present paper shows that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. In particular, using asymptotic expansions of both the finite sample distribution and the nonstandard limit distribution, we confirm that the second-order corrected critical value based on the expansion of the nonstandard limiting distribution is also second-order correct under the standard small-b asymptotics. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations confirm that the new plug-in procedure works well in finite samples.

Keywords: Asymptotic expansion, Bandwidth choice, Kernel method, Long-run variance, Loss function, Nonstandard asymptotics, Robust standard error, Type I and Type II errors

JEL Classification: C13; C14; C22; C51
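A minimal sketch of the object under discussion, assuming the Bartlett kernel: a long-run variance estimate with bandwidth M = bn, so that letting b shrink corresponds to the usual small-b (consistent HAC) asymptotics, while holding b fixed as n grows corresponds to fixed-b asymptotics. The AR(1) example and the two values of b are illustrative only.

```python
import numpy as np

def bartlett_lrv(u, b):
    """Bartlett-kernel long-run variance estimate with bandwidth M = b * n,
    where b is the bandwidth-to-sample-size ratio discussed in the abstract."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = len(u)
    M = max(1.0, b * n)
    lrv = u @ u / n
    for j in range(1, int(np.ceil(M))):
        w = 1.0 - j / M                    # Bartlett weight, zero beyond M
        lrv += 2.0 * w * (u[j:] @ u[:-j]) / n
    return lrv

# Robust t-statistic for the mean of an AR(1) series under a small and a
# large bandwidth ratio b; the larger b calls for fixed-b critical values
# rather than the standard normal ones.
rng = np.random.default_rng(3)
e = rng.standard_normal(300)
y = np.empty(300)
y[0] = e[0]
for t in range(1, 300):
    y[t] = 0.7 * y[t - 1] + e[t]
for b in (0.05, 0.5):
    se = np.sqrt(bartlett_lrv(y, b) / len(y))
    print(b, y.mean() / se)
```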

Abstract

Employing power kernels suggested in earlier work by the authors (2003), this paper shows how to refine methods of robust inference on the mean in a time series that rely on families of untruncated kernel estimates of the long-run parameters. The new methods improve the size properties of heteroskedasticity and autocorrelation robust (HAR) tests in comparison with conventional methods that employ consistent HAC estimates, and they raise test power in comparison with other tests that are based on untruncated kernel estimates. Large power parameter (ρ) asymptotic expansions of the nonstandard limit theory are developed in terms of the usual limiting chi-squared distribution, and corresponding large sample size and large ρ asymptotic expansions of the finite sample distribution of Wald tests are developed to justify the new approach. Exact finite sample distributions are given using operational techniques. The paper further shows that the optimal ρ that minimizes a weighted sum of type I and II errors has an expansion rate of at most O(T^{1/2}) and can even be O(1) for certain loss functions, and is therefore slower than the O(T^{2/3}) rate which minimizes the asymptotic mean squared error of the corresponding long run variance estimator. A new plug-in procedure for implementing the optimal ρ is suggested. Simulations show that the new plug-in procedure works well in finite samples.

JEL Classification: C13; C14; C22; C51

Keywords: Asymptotic expansion, consistent HAC estimation, data-determined kernel estimation, exact distribution, HAR inference, large rho asymptotics, long run variance, loss function, power parameter, sharp origin kernel
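The power kernels referred to above are powers of the Bartlett kernel. The short sketch below simply tabulates the kernel weight at a few lag fractions for several values of ρ, showing how raising the power sharpens the kernel at the origin and downweights distant autocovariances in place of a bandwidth; the particular lag fractions and ρ values are arbitrary.

```python
import numpy as np

def bartlett_power_kernel(x, rho):
    """Sharp origin (power) kernel: the Bartlett kernel raised to the power rho."""
    ax = np.abs(x)
    return np.where(ax <= 1.0, (1.0 - ax) ** rho, 0.0)

# Weight attached to autocovariances at lag fractions j/T as rho increases.
lag_fractions = np.array([0.0, 0.05, 0.1, 0.25, 0.5])
for rho in (1, 4, 16, 64):
    print(rho, np.round(bartlett_power_kernel(lag_fractions, rho), 4))
```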

Journal of Econometrics
Abstract

We correct the limit theory presented in an earlier paper by Hu and Phillips (Journal of Econometrics, 2004) for nonstationary time series discrete choice models with multiple choices and thresholds. The new limit theory shows that, in contrast to the binary choice model with nonstationary regressors and a zero threshold where there are dual rates of convergence (n^{1/4} and n^{3/4}), all parameters including the thresholds converge at the rate n^{3/4}. The presence of non-zero thresholds therefore materially affects rates of convergence. Dual rates of convergence reappear when stationary variables are present in the system. Some simulation evidence is provided, showing how the magnitude of the thresholds affects finite sample performance. A new finding is that predicted probabilities and marginal effect estimates have finite sample distributions that manifest a pile-up, or increasing density, towards the limits of the domain of definition.

Keywords: Brownian motion, Brownian local time, Discrete choices, Integrated processes, Pile-up problem, Threshold parameters

JEL Classification: C23, C25
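A hypothetical data-generating sketch of the model class in question: an ordered (multiple-choice) model with two non-zero thresholds and an integrated regressor. The parameter values, threshold locations, and variable names are invented for illustration and are not taken from the paper or its simulations.

```python
import numpy as np

# Ordered-choice model with an I(1) regressor and thresholds mu1 < mu2:
#   y_t = 0 if y*_t < mu1,  1 if mu1 <= y*_t < mu2,  2 otherwise,
# where y*_t = beta * x_t + u_t and x_t is a random walk.
rng = np.random.default_rng(4)
n = 1000
x = np.cumsum(rng.standard_normal(n))        # nonstationary (integrated) regressor
beta = 0.5
mu1, mu2 = -1.0, 1.0                         # non-zero threshold parameters
ystar = beta * x + rng.standard_normal(n)    # latent index
y = np.digitize(ystar, [mu1, mu2])           # observed choice in {0, 1, 2}
print(np.bincount(y, minlength=3))
```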

Abstract

A new class of kernel estimates is proposed for long run variance (LRV) and heteroskedasticity and autocorrelation consistent (HAC) estimation. The kernels are called steep origin kernels and are related to a class of sharp origin kernels explored by the authors (2003) in other work. They are constructed by exponentiating a mother kernel (a conventional lag kernel that is smooth at the origin) and they can be used without truncation or bandwidth parameters. When the exponent is passed to infinity with the sample size, these kernels produce consistent LRV/HAC estimates. The new estimates are shown to have limit normal distributions, and formulae for the asymptotic bias and variance are derived. With steep origin kernel estimation, bandwidth selection is replaced by exponent selection and data-based selection is possible. Rules for exponent selection based on minimum mean squared error (MSE) criteria are developed. Optimal rates for steep origin kernels that are based on exponentiating quadratic kernels are shown to be faster than those based on exponentiating the Bartlett kernel, which produces the sharp origin kernel. It is further shown that, unlike conventional kernel estimation where an optimal choice of kernel is possible in terms of MSE criteria (Priestley, 1962; Andrews, 1991), steep origin kernels are asymptotically MSE equivalent, so that choice of mother kernel does not matter asymptotically. The approach is extended to spectral estimation at frequencies ω ≠ 0. Some simulation evidence is reported detailing the finite sample performance of steep kernel methods in LRV/HAC estimation and robust regression testing in comparison with sharp kernel and conventional (truncated) kernel methods.

Keywords: Exponentiated kernel, Lag kernel, Long run variance, Optimal exponent, Spectral window, Spectrum

JEL Classification: C22
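A sketch of the construction described above, assuming the Parzen kernel as one possible mother kernel that is smooth (quadratic) at the origin: the steep origin kernel is the mother kernel raised to an exponent ρ and is used without truncation in the long-run variance estimate. The AR(1) example and the grid of ρ values are illustrative and do not implement the paper's data-based exponent selection.

```python
import numpy as np

def parzen(x):
    """Parzen mother kernel (smooth, i.e. quadratic, at the origin)."""
    ax = np.abs(x)
    return np.where(ax <= 0.5, 1.0 - 6.0 * ax**2 + 6.0 * ax**3,
                    np.where(ax <= 1.0, 2.0 * (1.0 - ax) ** 3, 0.0))

def steep_lrv(u, rho):
    """Untruncated LRV estimate with the steep origin kernel parzen(x)**rho;
    the exponent rho plays the role usually played by a bandwidth."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = len(u)
    lrv = u @ u / n
    for j in range(1, n):
        lrv += 2.0 * parzen(j / n) ** rho * (u[j:] @ u[:-j]) / n
    return lrv

# AR(1) with coefficient 0.5 and unit innovation variance:
# the true long-run variance is 1 / (1 - 0.5)^2 = 4.
rng = np.random.default_rng(5)
e = rng.standard_normal(2000)
y = np.empty(2000)
y[0] = e[0]
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + e[t]
for rho in (4, 16, 64, 256):
    print(rho, round(steep_lrv(y, rho), 3))
```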

Abstract

A new family of kernels is suggested for use in heteroskedasticity and autocorrelation consistent (HAC) and long run variance (LRV) estimation and robust regression testing. The kernels are constructed by taking powers of the Bartlett kernel and are intended to be used with no truncation (or bandwidth) parameter. As the power parameter (ρ) increases, the kernels become very sharp at the origin and increasingly downweight values away from the origin, thereby achieving effects similar to a bandwidth parameter. Sharp origin kernels can be used in regression testing in much the same way as conventional kernels with no truncation, as suggested in the work of Kiefer and Vogelsang (2002a, 2002b). A unified representation of HAC limit theory for untruncated kernels is provided using a new proof based on Mercer’s theorem that allows for kernels which may or may not be differentiable at the origin. This new representation helps to explain earlier findings like the dominance of the Bartlett kernel over quadratic kernels in test power and yields new findings about the asymptotic properties of tests with sharp origin kernels. Analysis and simulations indicate that sharp origin kernels lead to tests with improved size properties relative to conventional tests and better power properties than other tests using Bartlett and other conventional kernels without truncation.

If ρ is passed to infinity with the sample size (T), the new kernels provide consistent HAC and LRV estimates as well as continued robust regression testing. Optimal choice of ρ based on minimizing the asymptotic mean squared error of estimation is considered, leading to a rate of convergence of the kernel estimate of T^{1/3}, analogous to that of a conventional truncated Bartlett kernel estimate with an optimal choice of bandwidth. A data-based version of the consistent sharp origin kernel is obtained which is easily implementable in practical work.

Within this new framework, untruncated kernel estimation can be regarded as a form of conventional kernel estimation in which the usual bandwidth parameter is replaced by a power parameter that serves to control the degree of downweighting. Simulations show that in regression testing with the sharp origin kernel, the power properties are better than those with simple untruncated kernels (where ρ = 1) and at least as good as those with truncated kernels. Size is generally more accurate with sharp origin kernels than truncated kernels. In practice a simple fixed choice of the exponent parameter around ρ = 16 for the sharp origin kernel produces favorable results for both size and power in regression testing with sample sizes that are typical in econometric applications.

JEL Classification: C13; C14; C22; C51

Keywords: Consistent HAC estimation, Data determined kernel estimation, Long run variance, Mercer’s theorem, Power parameter, Sharp origin kernel
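To illustrate the consistency result in the abstract's second paragraph, the sketch below lets the power parameter grow with the sample size T when estimating the long-run variance of an AR(1) process with the sharp origin (Bartlett-power) kernel. The growth rule ρ = T^{2/3} is only a stand-in for a ρ that increases with T; the paper's MSE-optimal, data-based choice is not implemented here.

```python
import numpy as np

def sharp_lrv(u, rho):
    """Untruncated LRV estimate with the sharp origin kernel (1 - |x|)**rho."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = len(u)
    lrv = u @ u / n
    for j in range(1, n):
        lrv += 2.0 * (1.0 - j / n) ** rho * (u[j:] @ u[:-j]) / n
    return lrv

# AR(1) with coefficient 0.5: true long-run variance is 1 / (1 - 0.5)^2 = 4.
# Letting rho grow with T (here an illustrative rho = T^(2/3)) is the regime
# in which the abstract reports consistent estimation; the estimates are
# printed alongside the sample size and the chosen rho.
rng = np.random.default_rng(6)
for T in (200, 800, 3200):
    e = rng.standard_normal(T)
    y = np.empty(T)
    y[0] = e[0]
    for t in range(1, T):
        y[t] = 0.5 * y[t - 1] + e[t]
    rho = int(T ** (2.0 / 3.0))
    print(T, rho, round(sharp_lrv(y, rho), 3))
```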