Considerable evidence in past research shows size distortion in standard tests for zero autocorrelation or cross-correlation when time series are not independent, identically distributed random variables, pointing to the need for more robust procedures. Recent tests for serial correlation and cross-correlation in Dalla, Giraitis, and Phillips (2022) provide a more robust approach, allowing for heteroskedasticity and dependence in uncorrelated data under restrictions that require a smooth, slowly evolving deterministic heteroskedasticity process. The present work removes those restrictions and validates the robust testing methodology for a wider class of heteroskedastic time series models and innovations. The updated analysis given here enables more extensive use of the methodology in practical applications. Monte Carlo experiments confirm excellent finite sample performance of the robust test procedures even for extremely complex white noise processes. The empirical examples show that use of robust testing methods can materially reduce spurious evidence of correlations found by standard testing procedures.
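To make the self-normalization idea concrete, here is a minimal sketch in the spirit of such robust correlation tests: the lag-k sample autocovariance is studentized by the root of the sum of squared cross-products, which preserves an asymptotic N(0, 1) null distribution under uncorrelated but heteroskedastic noise. The GARCH-type design and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def robust_corr_test(x, k):
    """Self-normalized test of zero autocorrelation at lag k.

    Studentizing the sample autocovariance by the root of the sum of
    squared cross-products keeps the statistic asymptotically N(0, 1)
    under uncorrelated but heteroskedastic noise.
    """
    e = x - x.mean()
    prod = e[k:] * e[:-k]
    return prod.sum() / np.sqrt((prod ** 2).sum())

# Illustrative heteroskedastic white noise (GARCH(1,1)-type volatility)
rng = np.random.default_rng(0)
n = 2000
z = rng.standard_normal(n)
h = np.ones(n)
x = np.zeros(n)
for t in range(1, n):
    h[t] = 0.1 + 0.2 * x[t - 1] ** 2 + 0.7 * h[t - 1]
    x[t] = np.sqrt(h[t]) * z[t]
print(robust_corr_test(x, k=1))  # compare with N(0, 1) critical values
```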
This paper studies a linear panel data model with interactive fixed effects in which regressors, factors, and idiosyncratic error terms are all stationary but may exhibit long memory. The setup involves a new factor model formulation in which weakly dependent regressors, factors, and innovations are embedded as a special case. Standard methods based on principal component decomposition and least squares estimation, as in Bai (2009), are found to suffer bias correction failure because the order of magnitude of the bias is determined in a complex manner by the memory parameters. To cope with this failure and to provide a simple, implementable estimation procedure, frequency domain least squares estimation is proposed. The limit distribution of this frequency domain approach is established and a hybrid selection method is developed to determine the number of factors. Simulations show that the frequency domain estimator is robust to short memory and outperforms the time domain estimator when long range dependence is present. An empirical illustration of the approach is provided, examining the long-run relationship between stock returns and realized volatility.
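A stylized single-regressor sketch of the frequency domain least squares idea, with the paper's panel and interactive-factor structure abstracted away: regress the discrete Fourier transform of y on that of x over a band of Fourier frequencies. The function name, band choice, and AR(1) design are illustrative assumptions.

```python
import numpy as np

def fdls(y, x, j_min=1, j_max=None):
    """Frequency domain least squares for y_t = beta * x_t + u_t:
    regress the discrete Fourier transform of y on that of x over a
    band of Fourier frequencies lambda_j = 2*pi*j/n."""
    n = len(y)
    j_max = n // 2 if j_max is None else j_max
    wy = np.fft.fft(y)[j_min:j_max + 1]
    wx = np.fft.fft(x)[j_min:j_max + 1]
    # np.vdot conjugates its first argument
    return np.real(np.vdot(wx, wy)) / np.real(np.vdot(wx, wx))

rng = np.random.default_rng(1)
n = 500
u = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):                 # AR(1) regressor
    x[t] = 0.5 * x[t - 1] + u[t]
y = 2.0 * x + rng.standard_normal(n)
print(fdls(y, x))                     # approximately 2.0
```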
A heteroskedasticity-autocorrelation robust (HAR) test statistic is proposed to test for the presence of explosive roots in financial or real asset prices when the equation errors are strongly dependent. Limit theory for the test statistic is developed and extended to heteroskedastic models. The new test has stable size properties, unlike conventional test statistics that typically lead to size distortion and inconsistency in the presence of strongly dependent equation errors. The new procedure can be used to consistently time-stamp the origination and termination of an explosive episode under the same long memory error conditions. Simulations are conducted to assess the finite sample performance of the proposed test and estimators. An empirical application to the S&P 500 index highlights the usefulness of the proposed procedures in practical work.
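As a rough illustration of the HAR approach, the sketch below computes a right-tailed coefficient test of rho = 1 studentized by a Bartlett kernel long-run variance of the score. It is not the paper's statistic: the null limit distribution of such tests is nonstandard, so the output should be read against appropriate non-normal critical values, and the bandwidth rule is an illustrative default.

```python
import numpy as np

def bartlett_lrv(v, bw):
    """Bartlett-kernel long-run variance estimator for a series v."""
    v = v - v.mean()
    lrv = (v ** 2).mean()
    for k in range(1, bw + 1):
        lrv += 2.0 * (1.0 - k / (bw + 1.0)) * (v[k:] * v[:-k]).mean()
    return lrv

def right_tailed_har_t(y, bw=None):
    """t-statistic for H0: rho = 1 against H1: rho > 1 in
    y_t = rho * y_{t-1} + u_t, studentized by a kernel long-run
    variance of the score y_{t-1} * u_t."""
    ylag, dy = y[:-1], np.diff(y)
    n = len(dy)
    bw = int(4 * (n / 100.0) ** 0.25) if bw is None else bw
    rho = 1.0 + (ylag * dy).sum() / (ylag ** 2).sum()
    score = ylag * (dy - (rho - 1.0) * ylag)
    omega = bartlett_lrv(score, bw)
    return (rho - 1.0) * (ylag ** 2).sum() / np.sqrt(n * omega)
```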
The global financial crisis and Covid recession have renewed discussion concerning trend-cycle discovery in macroeconomic data, and boosting has recently upgraded the popular HP filter to a modern machine learning device suited to data-rich and rapid computational environments. This paper sheds light on its versatility in trend-cycle determination, explaining in a simple manner both HP filter smoothing and the consistency delivered by boosting for general trend detection. Applied to a universe of time series in FRED databases, boosting outperforms other methods in the timely capture of downturns at crises and the recoveries that follow. With its wide applicability, the boosted HP filter is a useful automated machine learning addition to the macroeconometric toolkit.
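A minimal sketch of HP smoothing and the boosting iteration, assuming the standard penalized least squares form of the filter: the trend after m boosting steps is (I - (I - S)^m) y for smoother matrix S. The fixed iteration count stands in for the data-driven stopping rules used in practice.

```python
import numpy as np

def hp_smoother(n, lam=1600.0):
    """HP smoother matrix S = (I + lam * D'D)^{-1}, where D is the
    (n-2) x n second-difference operator."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.inv(np.eye(n) + lam * (D.T @ D))

def boosted_hp(y, lam=1600.0, m=10):
    """Boosted HP filter: apply the HP smoother repeatedly to the
    current cycle and accumulate the fitted trend, giving
    trend_m = (I - (I - S)^m) y after m iterations."""
    S = hp_smoother(len(y), lam)
    trend = np.zeros(len(y))
    cycle = np.asarray(y, dtype=float).copy()
    for _ in range(m):
        step = S @ cycle
        trend += step
        cycle -= step
    return trend, cycle
```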
This paper extends recent asymptotic theory developed for the Hodrick-Prescott (HP) filter and boosted HP (bHP) filter to long range dependent time series that have fractional Brownian motion (fBM) limit processes after suitable standardization. Under general conditions it is shown that the asymptotic form of the HP filter is a smooth curve, analogous to the finding in Phillips and Jin (2021) for integrated time series and series with deterministic drifts. Boosting the filter using the iterative procedure suggested in Phillips and Shi (2021) leads, under well-defined rate conditions, to a consistent estimate of the fBM limit process, or of the fBM limit process with an accompanying deterministic drift when one is present. A stopping criterion is used to automate the boosting algorithm, giving a data-determined method for practical implementation. The theory is illustrated in simulations and two real data examples that highlight the differences between simple HP filtering and the use of boosting. The analysis is assisted by employing a uniformly and almost surely convergent trigonometric series representation of fBM.
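For readers wishing to experiment, the following sketch simulates a fractional Brownian motion path exactly via a Cholesky factorization of its covariance kernel; this is a standard alternative construction, not the trigonometric series representation used in the analysis. The resulting path can then be fed to an HP or bHP filtering routine such as the one sketched above.

```python
import numpy as np

def fbm_path(n, H, rng):
    """Exact fBM sample path on t = 1, ..., n via a Cholesky factor of
    the covariance kernel
    cov(B_H(s), B_H(t)) = (s^{2H} + t^{2H} - |s - t|^{2H}) / 2."""
    t = np.arange(1, n + 1, dtype=float)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    return L @ rng.standard_normal(n)

rng = np.random.default_rng(42)
path = fbm_path(400, H=0.75, rng=rng)  # long range dependent sample path
```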
Functional coefficient (FC) regressions allow for systematic flexibility in the responsiveness of a dependent variable to movements in the regressors, making them attractive in applications where marginal effects may depend on covariates. Such models are commonly estimated by local kernel regression methods. This paper explores situations where responsiveness to covariates is locally flat or fixed. The paper develops new asymptotics that take account of shape characteristics of the function in the locality of the point of estimation. Both stationary and integrated regressor cases are examined. The limit theory of FC kernel regression is shown to depend intimately on functional shape in ways that affect rates of convergence, optimal bandwidth selection, estimation, and inference. In FC cointegrating regression, flat behavior materially changes the limit distribution by introducing the shape characteristics of the function into the limiting distribution through variance as well as centering. In the boundary case where the number of zero derivatives tends to infinity, near parametric rates of convergence apply in stationary and nonstationary cases. Implications for inference are discussed and a feasible pre-test inference procedure is proposed that takes unknown potential flatness into consideration and provides a practical approach to inference.
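A minimal sketch of standard FC kernel regression with a locally flat coefficient function, the case studied here. The local level weighting, Gaussian kernel, and design choices are illustrative assumptions; the paper's shape-adaptive asymptotics and pre-test procedure are not reproduced.

```python
import numpy as np

def fc_kernel_fit(y, X, z, z0, h):
    """Local level kernel estimate of beta(z0) in the FC model
    y_t = X_t' beta(z_t) + u_t, using a Gaussian kernel with
    bandwidth h."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

rng = np.random.default_rng(3)
n = 2000
z = rng.uniform(-1.0, 1.0, n)
x = rng.standard_normal(n)
y = 1.0 * x + 0.5 * rng.standard_normal(n)   # beta(z) = 1: locally flat
print(fc_kernel_fit(y, x[:, None], z, z0=0.0, h=0.2))  # approx. [1.0]
```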
New methods are developed for identifying, estimating, and performing inference with nonstationary time series that have autoregressive roots near unity. The approach subsumes unit-root (UR), local unit-root (LUR), mildly integrated (MI), and mildly explosive (ME) specifications in the new model formulation. It is shown how a new parameterization involving a localizing rate sequence that characterizes departures from unity can be consistently estimated in all cases. Simple pivotal limit distributions that enable valid inference about the form and degree of nonstationarity apply for MI and ME specifications and new limit theory holds in UR and LUR cases. Normalizing and variance stabilizing properties of the new parameterization are explored. Simulations are reported that reveal some of the advantages of this alternative formulation of nonstationary time series. A housing market application of the methods is conducted that distinguishes the differing forms of house price behavior in Australian state capital cities over the past decade.
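The four specifications can be generated from a single parameterization rho_n = 1 + c / n^alpha, as the sketch below illustrates; the parameter values are arbitrary, and estimation of the localizing rate sequence, which is the contribution of the paper, is not attempted here.

```python
import numpy as np

def near_unit_root_path(n, c, alpha, rng):
    """Simulate y_t = rho_n * y_{t-1} + e_t with rho_n = 1 + c / n^alpha:
    c = 0 gives UR; alpha = 1 gives LUR; alpha in (0, 1) gives MI for
    c < 0 and ME for c > 0."""
    rho = 1.0 + c / n ** alpha
    e = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y

rng = np.random.default_rng(7)
paths = {"UR": near_unit_root_path(500, 0.0, 1.0, rng),
         "LUR": near_unit_root_path(500, -5.0, 1.0, rng),
         "MI": near_unit_root_path(500, -1.0, 0.7, rng),
         "ME": near_unit_root_path(500, 1.0, 0.7, rng)}
```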
Limit theory is provided for a wide class of covariance functionals of a nonstationary process and a stationary time series. The results are relevant to estimation and inference in nonlinear nonstationary regressions that involve unit root, local unit root, or fractional processes, and they include both parametric and nonparametric regressions. Self-normalized versions of these statistics are considered that are useful in inference. Numerical evidence reveals a strong bimodality in the finite sample distributions that persists for very large sample sizes even though the limit theory is Gaussian. New self-normalized versions are introduced that deliver improved approximations.
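A small Monte Carlo sketch of a self-normalized covariance functional of a unit root process and a stationary series; the functional f and the design are illustrative assumptions. Inspecting the simulated draws against the standard normal density illustrates the kind of finite sample departure from the Gaussian limit discussed above.

```python
import numpy as np

def self_normalized_stat(x, u, f=np.abs):
    """Self-normalized covariance functional
    S_n = sum_t f(x_t) u_t / sqrt(sum_t f(x_t)^2 u_t^2)."""
    v = f(x) * u
    return v.sum() / np.sqrt((v ** 2).sum())

rng = np.random.default_rng(8)
reps, n = 5000, 400
stats = np.empty(reps)
for r in range(reps):
    x = np.cumsum(rng.standard_normal(n))    # unit root process
    u = rng.standard_normal(n)               # stationary series
    stats[r] = self_normalized_stat(x, u)
# Compare a histogram of `stats` with the N(0, 1) density to see how
# the finite sample distribution can depart from its Gaussian limit.
```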
T. W. Anderson did pathbreaking work in econometrics during his remarkable career as an eminent statistician. His primary contributions to econometrics are reviewed here, including his early research on estimation and inference in simultaneous equations models and reduced rank regression. Some of his later works that connect in important ways to econometrics are also briefly covered, including limit theory in explosive autoregression, asymptotic expansions, and exact distribution theory for econometric estimators. The research is considered in the light of its influence on subsequent and ongoing developments in econometrics, notably confidence interval construction under weak instruments and inference in mildly explosive regressions.
This paper explores weak identification issues arising in commonly used models of economic and financial time series. Two highly popular configurations are shown to be asymptotically observationally equivalent: one with long memory and weak autoregressive dynamics, the other with antipersistent shocks and a near-unit autoregressive root. We develop a data-driven semiparametric and identification-robust approach to inference that reveals such ambiguities and documents the prevalence of weak identification in many realized volatility and trading volume series. The identification-robust empirical evidence generally favors long memory dynamics in volatility and volume, a conclusion that is corroborated using social-media news flow data.
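The observational equivalence can be seen directly in the spectral domain. The sketch below compares ARFIMA(1, d, 0) spectral densities for the two configurations; the specific parameter values are illustrative assumptions.

```python
import numpy as np

def arfima_spectrum(lam, d, phi, sigma2=1.0):
    """Spectral density of ARFIMA(1, d, 0):
    f(lam) = sigma2/(2 pi) * |1 - e^{i lam}|^{-2d} / |1 - phi e^{i lam}|^2."""
    z = np.exp(1j * lam)
    return (sigma2 / (2 * np.pi)) * np.abs(1 - z) ** (-2 * d) \
        / np.abs(1 - phi * z) ** 2

lam = np.linspace(0.02, 0.5, 200)              # low frequency band
f1 = arfima_spectrum(lam, d=0.4, phi=0.10)     # long memory, weak AR
f2 = arfima_spectrum(lam, d=-0.6, phi=0.99)    # antipersistent, near-unit root
# Over this band both log-spectra behave like -0.8 * log(lam) plus a
# constant: a near-unit AR root mimics an extra difference, shifting
# the apparent memory parameter by one.
```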
Limit theory is developed for least squares regression estimation of a model involving time trend polynomials and a moving average error process with a unit root. Models with these features can arise from data manipulation, such as overdifferencing, and from model features, such as the presence of multicointegration. The impact of such features on the asymptotic equivalence of least squares and generalized least squares is considered. Problems of rank deficiency that are induced asymptotically by the presence of time polynomials in the regression are also studied, focusing on the impact that singularities have on hypothesis testing using Wald statistics and matrix normalization. The paper is largely pedagogical but contains new results, notational innovations, and procedures for dealing with rank deficiency that are useful in wider applications.
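A minimal simulation sketch of the setting: a time trend regression whose error is an overdifferenced (MA unit root) process, with OLS compared against GLS built from the exact MA(1), theta = -1, error covariance. All design values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
t = np.arange(1, n + 1, dtype=float)
u = np.diff(rng.standard_normal(n + 1))   # overdifferenced error:
                                          # MA(1) with a unit root
y = 0.5 + 0.2 * t + u

X = np.column_stack([np.ones(n), t])      # linear trend regression
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# GLS with the exact MA(1), theta = -1 error covariance (tridiagonal:
# 2 on the diagonal, -1 on the first off-diagonals)
Sigma = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
print(beta_ols, beta_gls)
```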
A semiparametric triangular systems approach shows how multicointegration can occur naturally in an I(1) cointegrated regression model. The framework reveals the source of multicointegration as singularity of the long run error covariance matrix in an I(1) system, a feature noted but little explored in earlier work. Under such singularity, cointegrated I(1) systems embody a multicointegrated structure and may be analyzed and estimated without appealing to the associated I(2) system, but with consequential asymptotic properties that can introduce asymptotic bias into conventional methods of cointegrating regression. The present paper shows how estimation of such systems may be accomplished under multicointegration without losing the desirable properties that hold under simple cointegration, including mixed normality and pivotal inference. The approach uses an extended version of high-dimensional trend IV estimation with deterministic orthonormal instruments, leading to mixed normal limit theory and pivotal inference in singular multicointegrated systems as well as in standard cointegrated I(1) systems. Wald tests of general linear restrictions are constructed using a fixed-b long run variance estimator that leads to robust pivotal HAR inference in both cointegrated and multicointegrated cases. Simulations show the properties of the estimation and inferential procedures in finite samples, contrasting the cointegration and multicointegration cases. An empirical illustration involving housing stocks, starts, and completions is provided.
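A single-equation sketch of the trend IV idea, assuming an orthonormal sine basis for the deterministic instruments; the paper's extended high-dimensional version for singular multicointegrated systems is not reproduced here.

```python
import numpy as np

def trend_iv(y, x, K):
    """Trend IV estimate of beta in y_t = beta * x_t + u_t with an
    I(1) regressor x, using K deterministic orthonormal instruments
    phi_k(r) = sqrt(2) sin((k - 1/2) pi r) evaluated at r = t/n."""
    n = len(y)
    r = np.arange(1, n + 1) / n
    Z = np.column_stack([np.sqrt(2.0) * np.sin((k - 0.5) * np.pi * r)
                         for k in range(1, K + 1)])
    Px = Z @ np.linalg.solve(Z.T @ Z, Z.T @ x)   # project x on instruments
    return (Px @ y) / (Px @ x)

rng = np.random.default_rng(12)
n = 400
x = np.cumsum(rng.standard_normal(n))            # I(1) regressor
y = 1.5 * x + rng.standard_normal(n)
print(trend_iv(y, x, K=int(np.sqrt(n))))         # approximately 1.5
```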
This paper studies estimation and inference in panel threshold regression with unobserved individual-specific threshold effects, a feature that is important in practice and distinguishes the model from traditional linear panel data models. It is shown that within-regime differencing in the static model and within-regime first-differencing in the dynamic model cannot generate consistent estimators of the threshold, so correlated random effects models are suggested to handle the endogeneity in such general panel threshold models. We provide a unified estimation and inference framework that is valid for both the static and dynamic models, regardless of whether the unobserved individual-specific threshold effects exist. In particular, we propose alternative inference methods for the model parameters that have better theoretical properties than existing methods. Simulation studies and an empirical application illustrate the usefulness of the new estimation and inference methodology in practice.
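For context, the sketch below implements the classical within-transformation grid search for a panel threshold model (Hansen 1999 style), the kind of procedure that the paper shows can be inconsistent when individual-specific threshold effects are present. Array shapes and the quantile grid are illustrative choices.

```python
import numpy as np

def within_ssr(y, x, q, gamma):
    """Pooled SSR at candidate threshold gamma for
    y_it = b1*x_it*1{q_it <= gamma} + b2*x_it*1{q_it > gamma} + u_it,
    after the within (time-demeaning) transformation; y, x, q are N x T."""
    lo, hi = x * (q <= gamma), x * (q > gamma)
    X = np.stack([lo - lo.mean(1, keepdims=True),
                  hi - hi.mean(1, keepdims=True)], axis=-1).reshape(-1, 2)
    Y = (y - y.mean(1, keepdims=True)).ravel()
    b = np.linalg.lstsq(X, Y, rcond=None)[0]
    res = Y - X @ b
    return res @ res

def grid_threshold(y, x, q):
    """Pick the threshold minimizing the pooled SSR over a quantile grid."""
    grid = np.quantile(q, np.linspace(0.1, 0.9, 81))
    return min(grid, key=lambda g: within_ssr(y, x, q, g))
```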
Price bubbles in multiple assets are sometimes nearly coincident in occurrence. Such near-coincidence is strongly suggestive of co-movement in the associated asset prices and is likely driven by factors that are latent in the financial or economic system and have common effects across several markets. Can we detect the presence of such common factors at the early stages of their emergence? To answer this question, we build a factor model that includes I(1), mildly explosive, and stationary factors to capture normal, exuberant, and collapsing phases in such phenomena. The I(1) factor models the primary driving force of market fundamentals. The explosive and stationary factors model latent forces that underlie the formation and destruction of asset price bubbles, which typically exist only for subperiods of the sample. The paper provides an algorithm for testing the presence of, and date-stamping the origination and termination of, price bubbles determined by latent factors in a large-dimensional system embodying many markets. Asymptotics of the bubble test statistic are given under the null of no common bubbles and under the alternative of a common bubble across these markets. We prove consistency of a factor bubble detection process for the origination and termination dates of the common bubble. Simulations show good finite sample performance of the testing algorithm in terms of successful detection rates. Our methods are applied to real estate markets covering 89 major cities in China over the period January 2003 to March 2013. Results suggest the presence of three common bubble episodes in what are known as China's Tier 1 and Tier 2 cities over the sample period. There appears to be little evidence of a common bubble in Tier 3 cities.
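A schematic sketch of the two ingredients such an algorithm combines: principal component extraction of a common factor from a panel, and a right-tailed recursive (sup ADF) statistic applied to that factor, in the spirit of recursive bubble tests such as Phillips, Shi, and Yu (2015). The paper's test for latent common bubbles, its critical values, and its date-stamping rules differ in detail.

```python
import numpy as np

def first_factor(X):
    """First principal component factor from an N x T panel X."""
    Xc = X - X.mean(axis=1, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0] * np.sqrt(X.shape[1])

def adf_t(y):
    """ADF-type t-statistic (no lag augmentation) for rho = 1 in
    y_t = mu + rho * y_{t-1} + e_t."""
    ylag, dy = y[:-1], np.diff(y)
    Z = np.column_stack([np.ones(len(ylag)), ylag])
    b, rss = np.linalg.lstsq(Z, dy, rcond=None)[:2]
    s2 = rss[0] / (len(dy) - 2)
    return b[1] / np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[1, 1])

def sup_adf(f, r0=0.1):
    """Right-tailed sup ADF statistic over expanding samples applied
    to an extracted factor; compare with right-tail critical values."""
    T = len(f)
    start = max(int(r0 * T), 10)
    return max(adf_t(f[:t]) for t in range(start, T + 1))
```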