Mico Loretan Publications

Abstract

This paper studies tests for covariance stationarity under conditions which permit failure in the existence of fourth-order moments. The problem is important because many econometric diagnostics, such as tests for parameter constancy, constant variance, and ARCH and GARCH effects, routinely rely on fourth moment conditions. Moreover, such tests have recently been extensively employed with financial and commodity market data, where fourth moment conditions may well be quite tenuous and are usually untested. This paper considers several tests for covariance stationarity, including sample split prediction tests, cusum of squares tests and modified scaled range tests. When fourth moment conditions fail, we show how the asymptotic theory for these tests involves functionals of an asymmetric stable Lévy process, in place of conventional standard normal or Brownian bridge asymptotics. An interesting outcome of the new asymptotics is that the power of these tests depends critically on the tail thickness in the data. Thus, for data with no finite second moment, the above-mentioned tests are inconsistent. Some new tests for heterogeneity are suggested that are consistent in the infinite variance case. These are easily implemented and rely on standard normal asymptotics. A consistent estimator of the maximal moment exponent of a distribution is also proposed. Again, this estimator is easily implemented, has standard normal asymptotics, and leads to a simple test for the existence of moments up to a given order. An empirical application of these methods to the monthly stock return data recently studied in Pagan and Schwert (1989a, 1989b) and the daily returns of the Standard and Poor's 500 stock index is presented.
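
As a rough illustration of the kind of computation the abstract describes as "easily implemented", the sketch below uses a Hill-type tail-index estimator of the maximal moment exponent, computed from the k largest absolute observations, together with a standard normal approximation for a one-sided test that moments exist up to a given order p. The function names, the choice of k, and the exact normalization are illustrative assumptions, not the paper's own formulas.

    import numpy as np

    def hill_tail_index(x, k):
        # Hill-type estimate of the maximal moment exponent (tail index) alpha
        # from the k largest absolute observations; k should be small relative
        # to the sample size. A sketch only -- the paper's estimator may differ.
        a = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]  # |x| in descending order
        logs = np.log(a[:k + 1])
        alpha_inv = np.mean(logs[:k] - logs[k])  # mean log-excess over the (k+1)-th order statistic
        return 1.0 / alpha_inv

    def moment_existence_z(x, k, p):
        # One-sided z-statistic for the existence of moments up to order p,
        # based on the approximation sqrt(k) * (alpha_hat - alpha) / alpha ~ N(0, 1);
        # large positive values favour alpha > p, i.e. finite p-th moments.
        alpha_hat = hill_tail_index(x, k)
        return alpha_hat, np.sqrt(k) * (alpha_hat - p) / alpha_hat

In practice one would examine such estimates over a range of k and ask whether moments of order two and four appear finite, since the fourth moment condition is what the covariance stationarity tests discussed above rely on.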

Keywords: Asymmetric stable process, characteristic exponent, covariance stationarity, cusum of squares test, maximal moment exponent, sample split prediction test, scaled range, stable Lévy bridge, stock returns

JEL Classification: 211

Abstract

Our subject is econometric estimation and inference concerning long-run economic equilibria in models with stochastic trends. Our interest is focused on single-equation specifications such as those employed in the Error Correction Model (ECM) methodology of David Hendry (1987, 1989, inter alia) and the semiparametric modified least squares method of Phillips and Hansen (1989). We start by reviewing the prescriptions for empirical time series research that are presently available. We argue that the diversity of choices is confusing to practitioners and obscures the fact that statistical theory is clear about optimal inference procedures. Part of the difficulty arises from the many alternative time series representations of cointegrated systems. We present a detailed analysis of these various representations, the links between them, and the estimator choices to which they lead. An asymptotic theory is provided for a wide menu of econometric estimators and system specifications, accommodating different levels of prior information about the presence of unit roots and the nature of short-run dynamic adjustments. The single-equation ECM approach is studied in detail and our results lead to certain recommendations. Weak exogeneity and data coherence are generally insufficient for valid conditioning on the regressors in this approach. Strong exogeneity and data coherence are sufficient to validate conditioning. But the requirement of strong exogeneity rules out most cases of interest, because long-run economic equilibrium typically relates interdependent variables for which there is substantial time series feedback. One antidote for this problem in practice is the inclusion of leads as well as lags in the differences of the regressors. The simulations that we report, as well as the asymptotic theory, support the use of this procedure in practice. Our results also support the use of dynamic specifications that involve lagged long-run equilibrium relations rather than lagged differences in the dependent variable. Finally, our simulations point to problems of overfitting in single-equation ECMs. These appear to have important implications for empirical research in terms of the size distortions produced in significance tests that rely on nominal critical values delivered by conventional asymptotic theory. In sum, our results indicate that the single-equation ECM methodology has good potential for further development and improvement. But in comparison with the semiparametric modified least squares method of Phillips and Hansen (1989), the latter seems superior for inferential purposes in most cases.
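
The "leads as well as lags" augmentation mentioned above can be sketched in a few lines: regress y_t on a constant, the level of the regressor x_t, and leads and lags of its first difference, and read off the coefficient on x_t as the long-run estimate. The truncation parameter K, the single-regressor setup, and the use of plain least squares are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    def leads_lags_ols(y, x, K):
        # Long-run coefficient from a regression of y_t on a constant, x_t,
        # and dx_{t+j} for j = -K, ..., K (leads and lags of the differenced
        # regressor). A sketch under the assumptions stated in the text.
        y = np.asarray(y, dtype=float)
        x = np.asarray(x, dtype=float)
        dx = np.diff(x)                      # dx[s] = x[s+1] - x[s]
        T = len(y)
        rows, yy = [], []
        for t in range(K + 1, T - K):        # trim so every lead and lag exists
            rows.append([1.0, x[t]] + [dx[t + j - 1] for j in range(-K, K + 1)])
            yy.append(y[t])
        beta, *_ = np.linalg.lstsq(np.array(rows), np.array(yy), rcond=None)
        return beta[1]                       # coefficient on x_t: the long-run estimate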

Keywords: Co-integration, long-run equilibrium, error correction, semiparametric estimation, asymptotic theory, exogeneity

JEL Classification: 211

Abstract

This paper studies the properties of the von Neumann ratio for time series with infinite variance. The asymptotic theory is developed using recent results on the weak convergence of partial sums of time series with infinite variance to stable processes and of sample serial correlations to functions of stable variables. Our asymptotics cover the null of iid variates and general moving average (MA) alternatives. Regression residuals are also considered. In the static regression model, the Durbin-Watson statistic has the same limit distribution as the von Neumann ratio under general conditions. In dynamic models, however, the results are more complex and more interesting. When the regressors have thicker tail probabilities than the errors, we find that the Durbin-Watson and von Neumann ratio asymptotics are the same.
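
For reference, the two statistics compared in this abstract can be computed as follows; the sketch uses one common normalization of the von Neumann ratio (mean squared successive difference over the sample variance) and the usual Durbin-Watson formula applied to OLS residuals, and involves none of the infinite-variance asymptotics developed in the paper.

    import numpy as np

    def von_neumann_ratio(u):
        # Mean squared successive difference divided by the sample variance
        # (one common normalization of the von Neumann ratio).
        u = np.asarray(u, dtype=float)
        num = np.sum(np.diff(u) ** 2) / (len(u) - 1)
        den = np.sum((u - u.mean()) ** 2) / len(u)
        return num / den

    def durbin_watson(y, X):
        # Durbin-Watson statistic from the OLS residuals of y on X
        # (include a column of ones in X if an intercept is wanted).
        y = np.asarray(y, dtype=float)
        X = np.asarray(X, dtype=float)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ beta
        return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)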