
Peter C. B. Phillips Publications

Discussion Paper
Abstract

T. W. Anderson did pathbreaking work in econometrics during his remarkable career as an eminent statistician. His primary contributions to econometrics are reviewed here, including his early research on estimation and inference in simultaneous equations models and reduced rank regression. Some of his later works that connect in important ways to econometrics are also briefly covered, including limit theory in explosive autoregression, asymptotic expansions, and exact distribution theory for econometric estimators. The research is considered in the light of its influence on subsequent and ongoing developments in econometrics, notably confidence interval construction under weak instruments and inference in mildly explosive regressions.

Discussion Paper
Abstract

Limit theory is developed for least squares regression estimation of a model involving time trend polynomials and a moving average error process with a unit root. Models with these features can arise from data manipulation such as overdifferencing and model features such as the presence of multicointegration. The impact of such features on the asymptotic equivalence of least squares and generalized least squares is considered. Problems of rank deficiency that are induced asymptotically by the presence of time polynomials in the regression are also studied, focusing on the impact that singularities have on hypothesis testing using Wald statistics and matrix normalization. The paper is largely pedagogical but contains new results, notational innovations, and procedures for dealing with rank deficiency that are useful in cases of wider applicability.
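As a minimal illustration of the kind of model described (the notation here is ours, not necessarily the paper's), consider a trend polynomial regression with an overdifferenced error:

$$ y_t = \sum_{k=0}^{p} \beta_k t^k + u_t, \qquad u_t = \varepsilon_t - \varepsilon_{t-1}, $$

where \varepsilon_t is a stationary innovation, so that the moving average error u_t has a unit root, as happens when a stationary series is differenced once too often.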

Discussion Paper
Abstract

A semiparametric triangular systems approach shows how multicointegration can occur naturally in an I(1) cointegrated regression model. The framework reveals the source of multicointegration as singularity of the long run error covariance matrix in an I(1) system, a feature noted but little explored in earlier work. Under such singularity, cointegrated I(1) systems embody a multicointegrated structure and may be analyzed and estimated without appealing to the associated I(2) system but with consequential asymptotic properties that can introduce asymptotic bias into conventional methods of cointegrating regression. The present paper shows how estimation of such systems may be accomplished under multicointegration without losing the nice properties that hold under simple cointegration, including mixed normality and pivotal inference. The approach uses an extended version of high-dimensional trend IV estimation with deterministic orthonormal instruments that leads to mixed normal limit theory and pivotal inference in singular multicointegrated systems in addition to standard cointegrated I(1) systems. Wald tests of general linear restrictions are constructed using a fixed-b long run variance estimator that leads to robust pivotal HAR inference in both cointegrated and multicointegrated cases. Simulations show the properties of the estimation and inferential procedures in finite samples, contrasting the cointegration and multicointegration cases. An empirical illustration using data on housing stocks, starts, and completions is provided.
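In stylized form (again in notation of our own choosing), the triangular system in question is

$$ y_t = A x_t + u_{0t}, \qquad \Delta x_t = u_{xt}, $$

with x_t an I(1) process. If the long run covariance matrix of the errors is singular, so that some combination b'(u_{0t} - \Omega_{0x} \Omega_{xx}^{-1} u_{xt}) has zero long run variance, then, heuristically, the partial sums of the equilibrium error u_{0t} cointegrate with x_t in the direction b, which is the multicointegrated structure the abstract describes.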

Discussion Paper
Abstract

This paper studies estimation and inference in panel threshold regression with unobserved individual-specific threshold effects, a feature that is important in practice and distinguishes the model from traditional linear panel data models. It is shown that within-regime differencing in the static model and within-regime first-differencing in the dynamic model cannot generate consistent estimators of the threshold, so correlated random effects models are suggested to handle the endogeneity in such general panel threshold models. We provide a unified estimation and inference framework that is valid for both the static and dynamic models, whether or not the unobserved individual-specific threshold effects are present. In particular, we propose alternative inference methods for the model parameters that have better theoretical properties than existing methods. Simulation studies and an empirical application illustrate the usefulness of the new estimation and inference methodology in practice.
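A stylized static version of such a model (our notation) is

$$ y_{it} = \beta_1' x_{it} 1\{ q_{it} \le \gamma \} + \beta_2' x_{it} 1\{ q_{it} > \gamma \} + \mu_{i1} 1\{ q_{it} \le \gamma \} + \mu_{i2} 1\{ q_{it} > \gamma \} + \varepsilon_{it}, $$

where \gamma is the threshold, q_{it} is the threshold variable, and the unobserved individual effects \mu_{i1}, \mu_{i2} are allowed to differ across regimes. It is this regime dependence of the unobserved effects that within-regime differencing fails to remove when estimating \gamma.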

Discussion Paper
Abstract

The Hodrick-Prescott (HP) filter is one of the most widely used econometric methods in applied macroeconomic research. The technique is nonparametric and seeks to decompose a time series into a trend and a cyclical component unaided by economic theory or prior trend specification. Like all nonparametric methods, the HP filter depends critically on a tuning parameter that controls the degree of smoothing. Yet in contrast to modern nonparametric methods and applied work with these procedures, empirical practice with the HP filter almost universally relies on standard settings for the tuning parameter that have been suggested largely by experimentation with macroeconomic data and heuristic reasoning about the form of economic cycles and trends. As recent research (Phillips and Jin, 2015) has shown, standard settings may not be adequate in removing trends, particularly stochastic trends, in economic data. This paper proposes an easy-to-implement practical procedure of iterating the HP smoother that is intended to make the filter a smarter smoothing device for trend estimation and trend elimination. We call this iterated HP technique the boosted HP filter in view of its connection to L_2-boosting in machine learning. The paper develops limit theory to show that the boosted HP (bHP) filter asymptotically recovers trend mechanisms that involve unit root processes, deterministic polynomial drifts, and polynomial drifts with structural breaks, thereby covering the most common trends that appear in macroeconomic data and current modeling methodology. In doing so, the boosted filter provides a new mechanism for consistently estimating multiple structural breaks even without knowledge of the number of such breaks. A stopping criterion is used to automate the iterative HP algorithm, making it a data-determined method that is ready for modern data-rich environments in economic research. The methodology is illustrated using three real data examples that highlight the differences between simple HP filtering, the data-determined boosted filter, and an alternative autoregressive approach. These examples show that the bHP filter is helpful in analyzing a large collection of heterogeneous macroeconomic time series that manifest various degrees of persistence, trend behavior, and volatility.
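For readers who want to see the iteration concretely, the following is a minimal sketch in Python. The smoother matrix is the standard HP construction; the stopping rule shown (relative size of the trend increment) is a simplified placeholder for the paper's data-determined criterion, and the function names are ours.

```python
import numpy as np

def hp_smoother_matrix(n, lam=1600.0):
    """Smoother matrix S of the HP filter: trend = S @ y, where
    S = (I + lam * D'D)^(-1) and D is the (n-2) x n second-difference matrix."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.inv(np.eye(n) + lam * (D.T @ D))

def boosted_hp(y, lam=1600.0, max_iter=100, tol=1e-6):
    """Boosted HP filter: repeatedly apply the HP smoother to the cycle
    residuals (L_2-boosting). After m iterations the fitted trend is
    (I - (I - S)^m) y. The tolerance-based stopping rule below is a
    placeholder, not the paper's automated criterion."""
    y = np.asarray(y, dtype=float)
    S = hp_smoother_matrix(len(y), lam)
    trend = S @ y                        # ordinary HP trend (m = 1)
    for _ in range(max_iter - 1):
        increment = S @ (y - trend)      # re-smooth the residual cycle
        if np.linalg.norm(increment) < tol * np.linalg.norm(trend):
            break
        trend = trend + increment
    return trend, y - trend              # trend and cycle components
```

With quarterly data the conventional choice is lam = 1600, which corresponds to the standard settings the abstract refers to.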

Discussion Paper
Abstract

Multicointegration is traditionally defined as a particular long run relationship among variables in a parametric vector autoregressive model that introduces links between these variables and partial sums of the equilibrium errors. This paper departs from the parametric model, using a semiparametric formulation that reveals the explicit role that singularity of the long run conditional covariance matrix plays in determining multicointegration. The semiparametric framework has the advantage that short run dynamics do not need to be modeled and estimation by standard techniques such as fully modified least squares (FM-OLS) on the original I(1) system is straightforward. The paper derives FM-OLS limit theory in the multicointegrated setting, showing how faster rates of convergence are achieved in the direction of singularity and that the limit distribution depends on the distribution of the conditional one-sided long run covariance estimator used in FM-OLS estimation. Wald tests of restrictions on the regression coefficients have nonstandard limit theory which depends on nuisance parameters in general. The usual tests are shown to be conservative when the restrictions are isolated to the directions of singularity and, under certain conditions, are invariant to singularity otherwise. Simulations show that approximations derived in the paper work well in finite samples. We illustrate our findings by analyzing fiscal sustainability of the US government over the post-war period.
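In the traditional formulation referred to in the opening sentence, multicointegration means (in our notation) that

$$ u_t = y_t - A x_t \sim I(0) \qquad \text{and} \qquad \sum_{s=1}^{t} u_s - B x_t \sim I(0), $$

so that the I(1) variables cointegrate and the cumulated equilibrium errors, which are themselves I(1), cointegrate with those variables again at a second level.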

Discussion Paper
Abstract

Spatial units typically vary over many of their characteristics, introducing potential unobserved heterogeneity which invalidates commonly used homoskedasticity conditions. In the presence of unobserved heteroskedasticity, standard methods based on the (quasi-)likelihood function generally produce inconsistent estimates of both the spatial parameter and the coefficients of the exogenous regressors. A robust generalized method of moments estimator as well as a modified likelihood method have been proposed in the literature to address this issue. The present paper constructs an alternative indirect inference approach which relies on a simple ordinary least squares procedure as its starting point. Heteroskedasticity is accommodated by utilizing a new version of continuous updating that is applied within the indirect inference procedure to take account of the parametrization of the variance-covariance matrix of the disturbances. Finite sample performance of the new estimator is assessed in a Monte Carlo study and found to offer advantages over existing methods. The approach is implemented in an empirical application to house price data in the Boston area, where it is found that spatial effects in house price determination are much more significant under robustification to heterogeneity in the equation errors.
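The model in view is a spatial autoregression of the form (our notation)

$$ y = \lambda W y + X \beta + u, \qquad E(u_i) = 0, \quad \mathrm{var}(u_i) = \sigma_i^2, $$

with a known spatial weight matrix W and error variances \sigma_i^2 that may differ across units. Indirect inference then simulates data from this structure and chooses the structural parameters so that the simple OLS auxiliary estimates computed from the simulated data match those computed from the observed data.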

Discussion Paper
Abstract

The Hodrick-Prescott (HP) filter is one of the most widely used econometric methods in applied macroeconomic research. The technique is nonparametric and seeks to decompose a time series into a trend and a cyclical component unaided by economic theory or prior trend specification. Like all nonparametric methods, the HP filter depends critically on a tuning parameter that controls the degree of smoothing. Yet in contrast to modern nonparametric methods and applied work with these procedures, empirical practice with the HP filter almost universally relies on standard settings for the tuning parameter that have been suggested largely by experimentation with macroeconomic data and heuristic reasoning about the form of economic cycles and trends. As recent research has shown, standard settings may not be adequate in removing trends, particularly stochastic trends, in economic data. This paper proposes an easy-to-implement practical procedure of iterating the HP smoother that is intended to make the filter a smarter smoothing device for trend estimation and trend elimination. We call this iterated HP technique the boosted HP filter in view of its connection to L_2-boosting in machine learning. The paper develops limit theory to show that the boosted HP filter asymptotically recovers trend mechanisms that involve unit root processes, deterministic polynomial drifts, and polynomial drifts with structural breaks – the most common trends that appear in macroeconomic data and current modeling methodology. In doing so, the boosted filter provides a new mechanism for consistently estimating multiple structural breaks. A stopping criterion is used to automate the iterative HP algorithm, making it a data-determined method that is ready for modern data-rich environments in economic research. The methodology is illustrated using three real data examples that highlight the differences between simple HP filtering, the data-determined boosted filter, and an alternative autoregressive approach. These examples show that the boosted HP filter is helpful in analyzing a large collection of heterogeneous macroeconomic time series that manifest various degrees of persistence, trend behavior, and volatility. 
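In equations (our notation), the HP trend solves

$$ \hat{f} = \arg\min_f \sum_{t=1}^{n} (y_t - f_t)^2 + \lambda \sum_{t=2}^{n-1} (f_{t+1} - 2 f_t + f_{t-1})^2 = S y, \qquad S = (I_n + \lambda D'D)^{-1}, $$

where D is the second-difference matrix and \lambda is the tuning parameter, and the boosted filter re-applies the smoother to the cycle residuals, giving after m iterations the trend estimate

$$ \hat{f}^{(m)} = \left( I_n - (I_n - S)^m \right) y. $$

A code sketch of this iteration accompanies the earlier listing of this abstract above.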

Abstract

This paper provides a novel mechanism for identifying and estimating latent group structures in panel data using penalized regression techniques. We focus on linear models where the slope parameters are heterogeneous across groups but homogeneous within a group and the group membership is unknown. Two approaches are considered: penalized least squares (PLS) for models without endogenous regressors, and penalized GMM (PGMM) for models with endogeneity. In both cases we develop a new variant of Lasso called classifier-Lasso (C-Lasso) that serves to shrink individual coefficients to the unknown group-specific coefficients. C-Lasso achieves simultaneous classification and consistent estimation in a single step and the classification exhibits the desirable property of uniform consistency. For PLS estimation C-Lasso also achieves the oracle property so that group-specific parameter estimators are asymptotically equivalent to infeasible estimators that use individual group identity information. For PGMM estimation the oracle property of C-Lasso is preserved in some special cases. Simulations demonstrate good finite-sample performance of the approach both in classification and estimation. An empirical application investigating the determinants of cross-country savings rates finds two latent groups among 56 countries, providing empirical confirmation that higher savings rates go hand in hand with higher income growth.
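As a rough indication of the mechanism (our notation; details may differ from the paper's exact criterion), the PLS variant minimizes

$$ \frac{1}{NT} \sum_{i=1}^{N} \sum_{t=1}^{T} (y_{it} - \beta_i' x_{it})^2 + \frac{\lambda}{N} \sum_{i=1}^{N} \prod_{k=1}^{K} \lVert \beta_i - \alpha_k \rVert, $$

where the multiplicative penalty vanishes whenever an individual coefficient \beta_i coincides with one of the K group-level coefficients \alpha_k; each \beta_i is thereby shrunk toward some group value, which is what delivers classification and estimation in a single step.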