
Peter C. B. Phillips Publications

Discussion Paper
Abstract

A general asymptotic theory is established for sample cross moments of nonstationary time series, allowing for long range dependence and local unit roots. The theory provides a substantial extension of earlier results on nonparametric regression that include near-cointegrated nonparametric regression as well as spurious nonparametric regression. Many new models are covered by the limit theory, among which are functional coefficient regressions in which both regressors and the functional covariate are nonstationary. Simulations show finite sample performance matching well with the asymptotic theory and having broad relevance to applications, while revealing how dual nonstationarity in regressors and covariates raises sensitivity to bandwidth choice and the impact of dimensionality in nonparametric regression. An empirical example is provided involving climate data regression to assess Earth’s climate sensitivity to CO2, where nonstationarity is a prominent feature of both the regressors and covariates in the model. This application is the first rigorous empirical analysis to assess nonlinear impacts of CO2 on Earth’s climate.
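
To illustrate the kind of estimator the limit theory applies to, the following is a minimal sketch (not taken from the paper) of Nadaraya-Watson kernel regression with a hypothetical random-walk regressor; the variable names, data generating process, and bandwidth rule are illustrative only.

import numpy as np

def nw_kernel_regression(x, y, grid, h):
    """Nadaraya-Watson estimate of E[y | x = u] at each u in grid (Gaussian kernel)."""
    est = np.empty(len(grid))
    for i, u in enumerate(grid):
        w = np.exp(-0.5 * ((x - u) / h) ** 2)   # kernel weights around u
        est[i] = np.sum(w * y) / np.sum(w)
    return est

rng = np.random.default_rng(0)
n = 1000
x = np.cumsum(rng.standard_normal(n))           # unit-root (random walk) regressor
y = np.sin(x / 5.0) + rng.standard_normal(n)    # nonlinear regression function plus noise
grid = np.linspace(x.min(), x.max(), 50)
h = 0.1 * (x.max() - x.min())                   # crude illustrative bandwidth
fit = nw_kernel_regression(x, y, grid, h)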

Journal of Econometrics
Abstract

Datasets from field experiments with covariate-adaptive randomizations (CARs) usually contain extra covariates in addition to the strata indicators. We propose to incorporate these additional covariates via auxiliary regressions in the estimation and inference of unconditional quantile treatment effects (QTEs) under CARs. We establish the consistency and limit distribution of the regression-adjusted QTE estimator and prove that the use of multiplier bootstrap inference is non-conservative under CARs. The auxiliary regression may be estimated parametrically, nonparametrically, or via regularization when the data are high-dimensional. Even when the auxiliary regression is misspecified, the proposed bootstrap inferential procedure still achieves the nominal rejection probability in the limit under the null. When the auxiliary regression is correctly specified, the regression-adjusted estimator achieves the minimum asymptotic variance. We also discuss forms of adjustments that can improve the efficiency of the QTE estimators. The finite sample performance of the new estimation and inferential methods is studied in simulations, and an empirical application to a well-known dataset on the effect of expanding access to basic bank accounts on savings is reported.
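
A minimal sketch of the unadjusted QTE estimator with a multiplier (weighted) bootstrap is given below; it ignores the strata and the regression adjustment discussed above, and the mean-one exponential multipliers and data generating process are illustrative choices only.

import numpy as np

def weighted_quantile(x, w, tau):
    """tau-quantile of x under non-negative weights w."""
    order = np.argsort(x)
    x, w = x[order], w[order]
    cw = np.cumsum(w) / np.sum(w)
    return x[min(np.searchsorted(cw, tau), len(x) - 1)]

def qte(y, d, tau, w=None):
    """Difference of tau-quantiles between treated (d == 1) and control (d == 0)."""
    if w is None:
        w = np.ones_like(y, dtype=float)
    return (weighted_quantile(y[d == 1], w[d == 1], tau)
            - weighted_quantile(y[d == 0], w[d == 0], tau))

rng = np.random.default_rng(0)
n = 2000
d = rng.integers(0, 2, n)                        # hypothetical treatment indicator
y = 1.0 * d + rng.standard_normal(n)             # hypothetical outcome
tau = 0.5
est = qte(y, d, tau)

B = 500                                          # multiplier bootstrap replications
draws = np.array([qte(y, d, tau, rng.exponential(1.0, n)) for _ in range(B)])
print(f"QTE({tau}) = {est:.3f}, bootstrap s.e. = {draws.std(ddof=1):.3f}")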

Econometric Theory
Abstract

This paper studies control function (CF) approaches in endogenous threshold regression where the threshold variable is allowed to be endogenous. We first use a simple example to show that the structural threshold regression (STR) estimator of the threshold point in Kourtellos, Stengos and Tan (2016, Econometric Theory 32, 827–860) is inconsistent unless the endogeneity level of the threshold variable is low compared to the threshold effect. We correct the CF in the STR estimator to generate our first CF estimator using a method that extends the two-stage least squares procedure in Caner and Hansen (2004, Econometric Theory 20, 813–843). We develop our second CF estimator which can be treated as an extension of the classical CF approach in endogenous linear regression. Both these approaches embody threshold effect information in the conditional variance beyond that in the conditional mean. Given the threshold point estimates, we propose new estimates for the slope parameters. The first is a by-product of the CF approach, and the second employs generalized method of moments (GMM) procedures based on two new sets of moment conditions. Simulation studies, in conjunction with the limit theory, show that our second CF estimator and confidence interval for the threshold point together with the associated second GMM estimator and confidence interval for the slope parameter dominate the other methods. We further apply the new estimation methodology to an empirical application from international trade to illustrate its usefulness in practice.
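
As a point of comparison only, the sketch below estimates a threshold point by concentrated least squares over a grid of candidates, assuming an exogenous threshold variable; the control-function corrections for endogeneity developed in the paper are not reproduced, and all names and the data generating process are illustrative.

import numpy as np

def threshold_lse(y, x, q, trim=0.15):
    """Grid-search least-squares estimate of g in y = x*b1*1{q <= g} + x*b2*1{q > g} + e."""
    candidates = np.quantile(q, np.linspace(trim, 1 - trim, 100))
    best_g, best_ssr = None, np.inf
    for g in candidates:
        ssr = 0.0
        for mask in (q <= g, q > g):                            # fit each regime separately
            Xm = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta, *_ = np.linalg.lstsq(Xm, y[mask], rcond=None)
            ssr += np.sum((y[mask] - Xm @ beta) ** 2)
        if ssr < best_ssr:
            best_g, best_ssr = g, ssr
    return best_g

rng = np.random.default_rng(0)
n = 500
q = rng.standard_normal(n)                                      # threshold variable
x = rng.standard_normal(n)
y = np.where(q <= 0.3, 1.0 * x, 2.5 * x) + rng.standard_normal(n)
print(threshold_lse(y, x, q))                                   # should lie near the true threshold 0.3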

Journal of Econometrics
Abstract

This paper studies high-dimensional vector autoregressions (VARs) augmented with common factors that allow for strong cross-sectional dependence. Models of this type provide a convenient mechanism for accommodating the interconnectedness and temporal co-variability that are often present in large dimensional systems. We propose an ℓ1-nuclear-norm regularized estimator and derive the non-asymptotic upper bounds for the estimation errors as well as large sample asymptotics for the estimates. A singular value thresholding procedure is used to determine the correct number of factors with probability approaching one. Both the LASSO estimator and the conservative LASSO estimator are employed to improve estimation precision. The conservative LASSO estimates of the non-zero coefficients are shown to be asymptotically equivalent to the oracle least squares estimates. Simulations demonstrate that our estimators perform reasonably well in finite samples given the complex high-dimensional nature of the model. In an empirical illustration we apply the methodology to explore dynamic connectedness in the volatilities of financial asset prices and the transmission of ‘investor fear’. The findings reveal that a large proportion of connectedness is due to the common factors. Conditional on the presence of these common factors, the results still document remarkable connectedness due to the interactions between the individual variables, thereby supporting a common factor augmented VAR specification.
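
The following is a minimal sketch of singular value thresholding used to choose the number of common factors in a simulated panel; the cutoff below is a crude illustrative choice, not the threshold derived in the paper, and the regularized VAR estimation itself is not reproduced.

import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 50, 3
F = rng.standard_normal((T, r))                          # latent factors
L = rng.standard_normal((N, r))                          # loadings
X = F @ L.T + rng.standard_normal((T, N))                # factor structure plus noise

s = np.linalg.svd(X / np.sqrt(T * N), compute_uv=False)  # scaled singular values
cutoff = 2.0 * np.sqrt(1 / T + 1 / N)                    # crude illustrative threshold
k_hat = int(np.sum(s > cutoff))
print("estimated number of factors:", k_hat)             # typically recovers r = 3 in this design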

Econometric Theory
Abstract

Spatial units typically vary over many of their characteristics, introducing potential unobserved heterogeneity which invalidates commonly used homoskedasticity conditions. In the presence of unobserved heteroskedasticity, methods based on the quasi-likelihood function generally produce inconsistent estimates of both the spatial parameter and the coefficients of the exogenous regressors. A robust generalized method of moments estimator as well as a modified likelihood method have been proposed in the literature to address this issue. The present paper constructs an alternative indirect inference (II) approach which relies on a simple ordinary least squares procedure as its starting point. Heteroskedasticity is accommodated by utilizing a new version of continuous updating that is applied within the II procedure to take account of the parameterization of the variance–covariance matrix of the disturbances. Finite-sample performance of the new estimator is assessed in a Monte Carlo study. The approach is implemented in an empirical application to house price data in the Boston area, where it is found that spatial effects in house price determination are much more significant under robustification to heterogeneity in the equation errors.
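
A stripped-down sketch in the spirit of the indirect inference idea described above appears below: the auxiliary statistic is the OLS coefficient on the spatial lag, the binding function is approximated by simulation over a grid of spatial parameters, and the slope coefficients are held at their data-generating values for brevity. The weight matrix, data generating process, and grid are illustrative, and the paper's continuously-updated treatment of heteroskedasticity is not reproduced.

import numpy as np

rng = np.random.default_rng(0)
n = 100
W = np.zeros((n, n))
for i in range(n):                                       # ring contiguity weights,
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5          # row-normalized
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta, lam_true = np.array([1.0, 1.0]), 0.4
sig = np.exp(0.5 * X[:, 1])                              # unobserved heteroskedasticity

def simulate(lam, eps):
    """Generate y from the spatial model y = (I - lam*W)^{-1}(X beta + eps)."""
    return np.linalg.solve(np.eye(n) - lam * W, X @ beta + eps)

def aux_stat(y):
    """OLS coefficient on the spatial lag Wy in a regression of y on [X, Wy]."""
    Z = np.column_stack([X, W @ y])
    return np.linalg.lstsq(Z, y, rcond=None)[0][-1]

y_obs = simulate(lam_true, sig * rng.standard_normal(n))
target = aux_stat(y_obs)

grid, H = np.linspace(-0.5, 0.9, 29), 100                # candidate lambdas, simulation draws
binding = [np.mean([aux_stat(simulate(lam, rng.standard_normal(n))) for _ in range(H)])
           for lam in grid]
lam_hat = grid[np.argmin((np.array(binding) - target) ** 2)]
print("lambda_hat:", lam_hat)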

Journal of Econometrics
Abstract

Limit distribution theory in the econometric literature for functional coefficient cointegrating regression is incorrect in important ways, influencing rates of convergence, distributional properties, and practical work. The correct limit theory reveals that components from both bias and variance terms contribute to variability in the asymptotics. The errors in the literature arise because random variability in the bias term has been neglected in earlier research. In stationary regression this random variability is of smaller order and can be ignored in asymptotic analysis but not without consequences for finite sample performance. Implications of the findings for rate efficient estimation are discussed. Simulations in the Online Supplement provide further evidence supporting the new limit theory in nonstationary functional coefficient regressions.

Journal of Econometrics
Abstract

Multicointegration is traditionally defined as a particular long run relationship among variables in a parametric vector autoregressive model that introduces additional cointegrating links between these variables and partial sums of the equilibrium errors. This paper departs from the parametric model, using a semiparametric formulation that reveals the explicit role that singularity of the long run conditional covariance matrix plays in determining multicointegration. The semiparametric framework has the advantage that short run dynamics do not need to be modeled and estimation by standard techniques such as fully modified least squares (FM-OLS) on the original I(1) system is straightforward. The paper derives FM-OLS limit theory in the multicointegrated setting, showing how faster rates of convergence are achieved in the direction of singularity and that the limit distribution depends on the distribution of the conditional one-sided long run covariance estimator used in FM-OLS estimation. Wald tests of restrictions on the regression coefficients have nonstandard limit theory which depends on nuisance parameters in general. The usual tests are shown to be conservative when the restrictions are isolated to the directions of singularity and, under certain conditions, are invariant to singularity otherwise. Simulations show that approximations derived in the paper work well in finite samples. The findings are illustrated empirically in an analysis of fiscal sustainability of the US government over the post-war period.

Discussion Paper
Abstract

Considerable evidence in past research shows size distortion in standard tests for zero autocorrelation or cross-correlation when time series are not independent identically distributed random variables, pointing to the need for more robust procedures. Recent tests for serial correlation and cross-correlation in Dalla, Giraitis, and Phillips (2022) provide a more robust approach, allowing for heteroskedasticity and dependence in uncorrelated data under restrictions that require a smooth, slowly-evolving deterministic heteroskedasticity process. The present work removes those restrictions and validates the robust testing methodology for a wider class of heteroskedastic time series models and innovations. The updated analysis given here enables more extensive use of the methodology in practical applications. Monte Carlo experiments confirm excellent finite sample performance of the robust test procedures even for extremely complex white noise processes. The empirical examples show that use of robust testing methods can materially reduce spurious evidence of correlations found by standard testing procedures.
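
A minimal sketch of a self-normalized, heteroskedasticity-robust statistic for testing zero autocorrelation at lag k, in the spirit of the robust tests discussed above, is given below; under the null it is compared with standard normal critical values, and the data generating process is illustrative.

import numpy as np

def robust_corr_stat(x, k):
    """Self-normalized statistic for zero autocorrelation at lag k."""
    e = x - x.mean()
    prod = e[k:] * e[:-k]
    return prod.sum() / np.sqrt(np.sum(prod ** 2))

rng = np.random.default_rng(0)
n = 1000
vol = np.exp(0.5 * np.sin(np.linspace(0, 6 * np.pi, n)))   # time-varying volatility
x = vol * rng.standard_normal(n)                           # heteroskedastic white noise
print(robust_corr_stat(x, k=1))                            # compare with the N(0,1) critical value 1.96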

Discussion Paper
Abstract

This paper studies a linear panel data model with interactive fixed effects wherein regressors, factors and idiosyncratic error terms are all stationary but with potential long memory. The setup involves a new factor model formulation for which weakly dependent regressors, factors and innovations are embedded as a special case. Standard methods based on principal component decomposition and least squares estimation, as in Bai (2009), are found to suffer bias correction failure because the order of magnitude of the bias is determined in a complex manner by the memory parameters. To cope with this failure and to provide a simple implementable estimation procedure, frequency domain least squares estimation is proposed. The limit distribution of this frequency domain approach is established and a hybrid selection method is developed to determine the number of factors. Simulations show that the frequency domain estimator is robust to short memory and outperforms the time domain estimator when long range dependence is present. An empirical illustration of the approach is provided, examining the long-run relationship between stock returns and realized volatility.
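
A minimal sketch of frequency domain least squares is shown below: the discrete Fourier transform of the dependent variable is regressed on that of the regressors across Fourier ordinates. The factor structure, memory parameters, and frequency-band choices of the paper are omitted; names and the data generating process are illustrative.

import numpy as np

def fdls(y, X):
    """Frequency domain least squares of y on X (no intercept); returns a real slope vector."""
    wy = np.fft.fft(y)
    wX = np.fft.fft(X, axis=0)
    # Complex least squares over all Fourier ordinates; with all frequencies
    # retained this reproduces time domain OLS up to numerical error.
    beta, *_ = np.linalg.lstsq(wX, wy, rcond=None)
    return beta.real

rng = np.random.default_rng(0)
n = 512
X = rng.standard_normal((n, 2))
y = X @ np.array([1.0, -0.5]) + rng.standard_normal(n)
print(fdls(y, X))                                          # close to [1.0, -0.5]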

Discussion Paper
Abstract

A heteroskedasticity-autocorrelation robust (HAR) test statistic is proposed to test for the presence of explosive roots in financial or real asset prices when the equation errors are strongly dependent. Limit theory for the test statistic is developed and extended to heteroskedastic models. The new test has stable size properties unlike conventional test statistics that typically lead to size distortion and inconsistency in the presence of strongly dependent equation errors. The new procedure can be used to consistently time-stamp the origination and termination of an explosive episode under similar conditions of long memory errors. Simulations are conducted to assess the finite sample performance of the proposed test and estimators. An empirical application to the S&P 500 index highlights the usefulness of the proposed procedures in practical work.
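
For illustration only, the sketch below forms a right-tailed t-ratio for an explosive root by regressing y_t on y_{t-1} and using a Newey-West (HAR-type) standard error; it is not the test statistic developed in the paper, and the truncation rule is a conventional illustrative choice.

import numpy as np

def explosive_t_stat(y, lag_trunc=None):
    """Right-tailed t-ratio for rho > 1 in y_t = rho*y_{t-1} + u_t, Bartlett-kernel HAC variance."""
    y1, y0 = y[1:], y[:-1]
    n = len(y1)
    rho = np.sum(y0 * y1) / np.sum(y0 ** 2)
    v = (y1 - rho * y0) * y0                     # score contributions
    L = lag_trunc or int(4 * (n / 100) ** (2 / 9))
    lrv = np.sum(v ** 2)
    for j in range(1, L + 1):
        lrv += 2 * (1 - j / (L + 1)) * np.sum(v[j:] * v[:-j])   # Bartlett weights
    se = np.sqrt(lrv) / np.sum(y0 ** 2)
    return (rho - 1.0) / se

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(400))          # unit-root null: no explosive behavior
print(explosive_t_stat(y))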

Discussion Paper
Abstract

The global financial crisis and Covid recession have renewed discussion concerning trend-cycle discovery in macroeconomic data, and boosting has recently upgraded the popular HP filter to a modern machine learning device suited to data-rich and rapid computational environments. This paper sheds light on its versatility in trend-cycle determination, explaining in a simple manner both HP filter smoothing and the consistency delivered by boosting for general trend detection. Applied to a universe of time series in FRED databases, boosting outperforms other methods in timely capturing downturns at crises and recoveries that follow. With its wide applicability the boosted HP filter is a useful automated machine learning addition to the macroeconometric toolkit.
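
A minimal sketch of the HP filter and its boosted version is given below: the HP trend solves a ridge-type problem with a second-difference penalty, and boosting re-applies the smoother to the current residual, accumulating the fitted trend. The data, tuning parameter, and fixed number of iterations are illustrative; the data-driven stopping rule is omitted.

import numpy as np

def hp_smoother(n, lam=1600.0):
    """Smoother matrix S with HP trend = S @ y, where S = (I + lam * D'D)^{-1}."""
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second-difference matrix
    return np.linalg.inv(np.eye(n) + lam * (D.T @ D))

def boosted_hp(y, lam=1600.0, iterations=10):
    """Boosted HP trend: repeatedly smooth the residual and accumulate the fit."""
    S = hp_smoother(len(y), lam)
    trend, resid = np.zeros_like(y), y.copy()
    for _ in range(iterations):                  # a fixed number of passes; a stopping
        step = S @ resid                         # criterion would normally be used
        trend += step
        resid -= step
    return trend

rng = np.random.default_rng(0)
n = 200
y = 0.05 * np.arange(n) + np.cumsum(0.2 * rng.standard_normal(n))   # drift plus stochastic trend
trend_hp = hp_smoother(n) @ y                    # one-shot HP trend
trend_bhp = boosted_hp(y)                        # boosted HP trend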

Discussion Paper
Abstract

This paper extends recent asymptotic theory developed for the Hodrick-Prescott (HP) filter and boosted HP (bHP) filter to long range dependent time series that have fractional Brownian motion (fBM) limit processes after suitable standardization. Under general conditions it is shown that the asymptotic form of the HP filter is a smooth curve, analogous to the finding in Phillips and Jin (2021) for integrated time series and series with deterministic drifts. Boosting the filter using the iterative procedure suggested in Phillips and Shi (2021) leads under well defined rate conditions to a consistent estimate of the fBM limit process or the fBM limit process with an accompanying deterministic drift when that is present. A stopping criterion is used to automate the boosting algorithm, giving a data-determined method for practical implementation. The theory is illustrated in simulations and two real data examples that highlight the differences between simple HP filtering and the use of boosting. The analysis is assisted by employing a uniformly and almost surely convergent trigonometric series representation of fBM.

Econometric Theory
Abstract

Functional coefficient (FC) regressions allow for systematic flexibility in the responsiveness of a dependent variable to movements in the regressors, making them attractive in applications where marginal effects may depend on covariates. Such models are commonly estimated by local kernel regression methods. This paper explores situations where responsiveness to covariates is locally flat or fixed. The paper develops new asymptotics that take account of shape characteristics of the function in the locality of the point of estimation. Both stationary and integrated regressor cases are examined. The limit theory of FC kernel regression is shown to depend intimately on functional shape in ways that affect rates of convergence, optimal bandwidth selection, estimation, and inference. In FC cointegrating regression, flat behavior materially changes the limit distribution by introducing the shape characteristics of the function into the limiting distribution through variance as well as centering. In the boundary case where the number of zero derivatives tends to infinity, near parametric rates of convergence apply in stationary and nonstationary cases. Implications for inference are discussed and a feasible pre-test inference procedure is proposed that takes unknown potential flatness into consideration and provides a practical approach to inference.
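
A minimal sketch of a kernel-weighted (local level) functional coefficient estimator with a scalar regressor is shown below; the data generating process has a locally flat (constant) coefficient of the kind studied in the paper, but the bandwidth and all names are illustrative.

import numpy as np

def fc_estimate(y, x, z, u, h):
    """Local level FC estimate of beta(u) in y_t = beta(z_t) * x_t + e_t (scalar x)."""
    w = np.exp(-0.5 * ((z - u) / h) ** 2)        # Gaussian kernel weights in the covariate
    return np.sum(w * x * y) / np.sum(w * x * x)

rng = np.random.default_rng(0)
n = 2000
z = rng.standard_normal(n)                       # stationary functional covariate
x = np.cumsum(rng.standard_normal(n))            # integrated (unit root) regressor
y = 1.0 * x + rng.standard_normal(n)             # beta(z) = 1: a flat functional coefficient
grid = np.linspace(-1, 1, 21)
fit = np.array([fc_estimate(y, x, z, u, h=0.3) for u in grid])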

Econometric Theory
Abstract

New methods are developed for identifying, estimating, and performing inference with nonstationary time series that have autoregressive roots near unity. The approach subsumes unit-root (UR), local unit-root (LUR), mildly integrated (MI), and mildly explosive (ME) specifications in the new model formulation. It is shown how a new parameterization involving a localizing rate sequence that characterizes departures from unity can be consistently estimated in all cases. Simple pivotal limit distributions that enable valid inference about the form and degree of nonstationarity apply for MI and ME specifications and new limit theory holds in UR and LUR cases. Normalizing and variance stabilizing properties of the new parameterization are explored. Simulations are reported that reveal some of the advantages of this alternative formulation of nonstationary time series. A housing market application of the methods is conducted that distinguishes the differing forms of house price behavior in Australian state capital cities over the past decade.
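
For orientation, the sketch below simulates a local-to-unity autoregression and reports the fitted root together with n*(rho_hat - 1), the usual localizing quantity; it does not implement the paper's new parameterization or its estimation of the localizing rate sequence.

import numpy as np

rng = np.random.default_rng(0)
n, c = 500, -5.0
rho = 1.0 + c / n                                # local-to-unity autoregressive root
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.standard_normal()

rho_hat = np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)
print("rho_hat:", rho_hat, " n*(rho_hat - 1):", n * (rho_hat - 1.0))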

Discussion Paper
Abstract

Limit theory is provided for a wide class of covariance functionals of a nonstationary process and stationary time series. The results are relevant to estimation and inference in nonlinear nonstationary regressions that involve unit root, local unit root or fractional processes and they include both parametric and nonparametric regressions. Self normalized versions of these statistics are considered that are useful in inference. Numerical evidence reveals a strong bimodality in the finite sample distributions that persists for very large sample sizes although the limit theory is Gaussian. New self normalized versions are introduced that deliver improved approximations.
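
A minimal sketch of a self-normalized covariance functional between an integrable transform of a nonstationary process and a stationary series is given below; the transform and data generating process are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = np.cumsum(rng.standard_normal(n))            # unit-root process
u = rng.standard_normal(n)                       # stationary series
f = np.exp(-np.abs(x))                           # integrable nonlinear transform of x
S = np.sum(f * u) / np.sqrt(np.sum(f ** 2 * u ** 2))   # self-normalized covariance functional
print(S)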