This note shows that the mixed normal asymptotic limit of the trend IV estimator with a fixed number of deterministic instruments (fTIV) holds in both singular (multicointegrated) and nonsingular cointegration systems, thereby relaxing the exogeneity condition in Phillips and Kheifets (2024, Theorem 1(ii)). The mixed normality of the limiting distribution of fTIV allows for asymptotically pivotal F tests about the cointegration parameters and for simple efficiency comparisons of the estimators for different numbers K of instruments, as well as comparisons with the trend IV estimator when K → ∞ with the sample size.
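To illustrate the inferential payoff of mixed normality, a Wald-type statistic for q linear restrictions on the cointegration parameters (generic notation assumed here, not taken from the paper) takes the form
$$ W_n = (R\hat{\beta}_{\mathrm{fTIV}} - r)'\big[R\hat{V}_n R'\big]^{-1}(R\hat{\beta}_{\mathrm{fTIV}} - r) \Rightarrow \chi^2_q, $$
where $\hat{V}_n$ estimates the (random) conditional covariance of the limit distribution; mixed normality makes $W_n$ asymptotically pivotal, so that $W_n/q$ behaves as a standard F-type statistic.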
In GMM estimation, it is well known that if the moment dimension grows with the sample size, the asymptotics of GMM differ from the standard finite dimensional case. The present work examines the asymptotic properties of infinite dimensional GMM estimation when the weight matrix is formed by inverting Brownian motion or Brownian bridge covariance kernels. These kernels arise in econometric work such as minimum Cramér–von Mises distance estimation when testing distributional specification. The properties of GMM estimation are studied in different environments where the empirical moment conditions converge weakly to either a smooth Gaussian process or a non-differentiable Gaussian process. Conditions are also developed for testing the validity of the moment conditions by means of a suitably constructed J-statistic. When these conditions are invalid, an alternative test, called the U-test, is proposed. As an empirical application of these infinite dimensional GMM procedures, the evolution of cohort labor income inequality indices is studied using the Continuous Work History Sample database. The findings show that labor income inequality indices are maximized in early career years, implying that economic policies to reduce income inequality should be more effective when designed for workers at an early stage in their career cycles.
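A minimal sketch of the continuum moment setting involved, with notation assumed here for illustration: in minimum Cramér–von Mises distance estimation the moment function is indexed by $r \in [0,1]$,
$$ g_n(r,\theta) = \frac{1}{\sqrt{n}} \sum_{t=1}^n \big\{ \mathbf{1}\{F(x_t,\theta) \le r\} - r \big\}, \qquad K(r,s) = r \wedge s - rs, $$
where $K$ is the Brownian bridge covariance kernel that arises as the limit covariance of $g_n$ under correct specification; the GMM criterion then weights $g_n$ by the inverse of the integral operator with kernel $K$, i.e., $\int_0^1\!\int_0^1 g_n(r,\theta)\,K^{-1}(r,s)\,g_n(s,\theta)\,dr\,ds$.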
This paper builds on methodology that corrects for irregular spacing between realizations of unevenly spaced time series and provides appropriately corrected estimates of autoregressive model parameters. Using these methods for dealing with missing data, we develop time series tools for forecasting and estimation of autoregressions with cyclically varying parameters in which periodicity is assumed. To illustrate the robustness and flexibility of the methodology, an application is conducted to model daily temperature data. The approach helps to uncover cyclical (daily as well as annual) patterns in the data without imposing restrictive assumptions. Using the Central England Temperature (CET) time series (1772–present), we find, with a high level of accuracy, that intra-year average temperatures and persistence have increased in the later sample period 1850–2020 compared with 1772–1850, especially for the winter months, whereas the estimated variance of the random shocks in the autoregression appears to have decreased over time.
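As a stylized example of the kind of cyclically varying autoregression involved (the parametrization here is illustrative rather than the paper's exact specification), persistence, level, and shock variance may be allowed to depend on the position of an observation within the annual or daily cycle:
$$ y_t = \mu_{s(t)} + \rho_{s(t)}\, y_{t-1} + \sigma_{s(t)}\, \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{iid}(0,1), $$
where $s(t)$ indexes the season (e.g., month or day of year) of observation $t$, and irregular spacing or missing observations are handled by the correction methods described above.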
New limit theory is provided for a wide class of sample variance and covariance functionals involving both nonstationary and stationary time series. Sample functionals of this type commonly appear in regression applications and the asymptotics are particularly relevant to estimation and inference in nonlinear nonstationary regressions that involve unit root, local unit root or fractional processes. The limit theory is unusually general in that it covers both parametric and nonparametric regressions. Self-normalized versions of these statistics are considered that are useful in inference. Numerical evidence reveals interesting strong bimodality in the finite sample distributions of conventional self-normalized statistics, similar to the bimodality that can arise in t-ratio statistics based on heavy tailed data. Bimodal behavior in these statistics is due to the presence of long memory innovations and is shown to persist for very large sample sizes even though the limit theory is Gaussian when the long memory innovations are stationary. Bimodality is shown to occur even in the limit theory when the long memory innovations are nonstationary. To address these complications, new self-normalized versions of the test statistics are introduced that deliver improved approximations that can be used for inference.
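For concreteness, a conventional self-normalized statistic of the type in question has the generic form (notation assumed here)
$$ T_n = \frac{\sum_{t=1}^n x_t u_t}{\big(\sum_{t=1}^n x_t^2 u_t^2\big)^{1/2}}, $$
where $x_t$ may be a unit root, local unit root or fractional process and $u_t$ a possibly long memory innovation sequence; it is the interaction of nonstationarity in $x_t$ with long memory in $u_t$ that generates the bimodal finite sample behavior, and in the nonstationary long memory case the bimodal limit behavior, described above.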
Limit theory for functional coefficient cointegrating regression was recently found to be considerably more complex than earlier understood. The issues were explained and correct limit theory derived for the kernel weighted local constant estimator in Phillips and Wang (2023b). The present paper provides complete limit theory for the general kernel weighted local p-th order polynomial estimator of the functional coefficient and the coefficient derivatives. Both stationary and nonstationary regressors are allowed. Implications for bandwidth selection are discussed. An adaptive procedure to select the fit order p is proposed and found to work well. A robust t-ratio is constructed following the new correct limit theory, which corrects and improves the usual t-ratio in the literature. Furthermore, the robust t-ratio is valid and works well regardless of the properties of the regressors, thereby providing a unified procedure to compute the t-ratio and facilitating practical inference. Testing constancy of the functional coefficient is also considered. Supportive finite sample studies are provided that corroborate the new asymptotic theory.
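As a sketch of the estimator class studied (standard notation assumed here), for the model $y_t = \beta(z_t)'x_t + u_t$ the kernel weighted local $p$-th order polynomial estimator at a point $z$ solves
$$ \min_{b_0,\dots,b_p} \sum_{t=1}^n \Big\{ y_t - \sum_{j=0}^p b_j' x_t (z_t - z)^j \Big\}^2 K\!\Big(\frac{z_t - z}{h}\Big), $$
with $\hat\beta(z) = \hat b_0$ and $j!\,\hat b_j$ estimating the $j$-th derivative $\beta^{(j)}(z)$; the local constant case $p = 0$ is the estimator analyzed in Phillips and Wang (2023b).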
Financial econometrics is a dynamic discipline that began to take on its present form around the turn of the century. Since then it has found a permanent position as a popular course sequence in both undergraduate and graduate teaching programs in economics, finance, and business schools. Because of the breadth of the subject’s foundations, the extent of its coverage in applications, and the wide range of students these courses attract, with diverse interests, skill sets, and technical capabilities, teaching financial econometrics presents many challenges to the university educator. This chapter addresses some of these challenges, provides helpful guidelines to educators, and draws on the combined experience of the authors as teachers and researchers of modern financial econometrics as well as their recent textbook Financial Econometric Modeling (Hurn et al., 2021). The focus is on students converting to finance and econometrics with limited technical background.
A new self-weighted least squares (LS) estimation theory is developed for local unit root (LUR) autoregression with heteroskedasticity. The proposed estimator has a mixed Gaussian limit distribution and the corresponding studentized statistic converges to a standard normal distribution free of the unknown localizing coefficient, which is not consistently estimable. The estimator is super consistent with a convergence rate slightly below the O_P(n) rate of LS estimation. The asymptotic theory relies on a new framework of convergence to the local time of a Gaussian process, allowing the sample moments to be generated from martingales and many other integrated dependent sequences. A new unit root (UR) test in augmented autoregression is developed using self-weighted estimation and the methods are employed in predictive regression, providing an alternative approach to IVX regression. Simulation results showing good finite sample performance of these methods are reported together with a small empirical application.
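A minimal sketch of the setting and estimator class, with the weight sequence left generic rather than reproducing the paper's construction:
$$ y_t = \rho_n y_{t-1} + u_t, \quad \rho_n = 1 + \frac{c}{n}, \qquad \hat\rho_{SW} = \frac{\sum_t w_t\, y_{t-1} y_t}{\sum_t w_t\, y_{t-1}^2}, $$
where $c$ is the localizing coefficient that cannot be consistently estimated and $w_t$ is a data-dependent self-weighting sequence; the corresponding studentized statistic for $\hat\rho_{SW}$ has a standard normal limit that does not depend on $c$.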
Functional coefficient (FC) cointegrating regressions offer empirical investigators flexibility in modeling economic relationships by introducing covariates that influence the direction and intensity of comovement among nonstationary time series. FC regression models are also useful when formal cointegration is absent, in the sense that the equation errors may themselves be nonstationary, but where the nonstationary series display well-defined FC linkages that can be meaningfully interpreted as correlation measures involving the covariates. The present paper proposes new nonparametric estimators for such FC regression models where the nonstationary series display linkages that enable consistent estimation of the correlation measures between them. Specifically, we develop √n-consistent estimators for the functional coefficient and establish their asymptotic distributions, which involve mixed normal limits that facilitate inference. Two novel features that appear in the limit theory are (i) the need for non-diagonal matrix normalization due to the presence of stationary and nonstationary components in the regression; and (ii) random bias elements that appear in the asymptotic distribution of the kernel estimators, again resulting from the nonstationary regression components. Numerical studies reveal that the proposed estimators achieve significant efficiency improvements compared to the estimators suggested in earlier work by Sun et al. (2011). Easily implementable specification tests with standard chi-square asymptotics are suggested to check for constancy of the functional coefficient. These tests are shown to have a faster divergence rate under local alternatives and to enjoy superior performance in simulations compared with tests proposed recently in Gan et al. (2014). An empirical application based on the quantity theory of money illustrates the practical use of correlated but non-cointegrated regression relations.
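In generic notation (assumed here for illustration), the models in question take the form
$$ y_t = \beta(z_t)' x_t + u_t, $$
where $x_t$ contains nonstationary (e.g., unit root) regressors, $z_t$ is a covariate governing the strength and direction of comovement, and $u_t$ may itself be nonstationary, so that $\beta(\cdot)$ is interpreted as a correlation-type measure of linkage rather than a cointegrating coefficient.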
Considerable evidence in past research shows size distortion in standard tests for zero autocorrelation or zero cross-correlation when time series are not independent identically distributed random variables, pointing to the need for more robust procedures. Recent tests for serial correlation and cross-correlation in Dalla, Giraitis, and Phillips (2022) provide a more robust approach, allowing for heteroskedasticity and dependence in uncorrelated data under restrictions that require a smooth, slowly evolving deterministic heteroskedasticity process. The present work removes those restrictions and validates the robust testing methodology for a wider class of innovations and regression residuals, allowing for heteroskedastic, uncorrelated, and nonstationary data settings. The updated analysis given here enables more extensive use of the methodology in practical applications. Monte Carlo experiments confirm excellent finite sample performance of the robust test procedures even for extremely complex white noise processes. The empirical examples show that use of robust testing methods can materially reduce spurious evidence of correlations found by standard testing procedures.
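For reference, the robust statistics in question are of the self-normalized form (notation assumed here; see Dalla, Giraitis, and Phillips (2022) for the exact construction)
$$ \tilde t_k = \frac{\sum_{t=k+1}^{n} e_t e_{t-k}}{\big(\sum_{t=k+1}^{n} e_t^2 e_{t-k}^2\big)^{1/2}}, $$
which is asymptotically standard normal under the null of zero correlation at lag $k$ for a wide class of heteroskedastic and dependent uncorrelated series, in contrast to the conventional statistic $\sqrt{n}\,\hat\rho_k$, whose null distribution generally depends on nuisance features of the data.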
This paper considers a linear panel model with interactive fixed effects in which unobserved individual and time heterogeneities are captured by latent group structures and an unknown structural break, respectively. To enhance realism, the model may have different numbers of groups and/or different group memberships before and after the break. With preliminary nuclear norm regularized estimation followed by row- and column-wise linear regressions, we estimate the break point based on the idea of binary segmentation, and we simultaneously estimate the latent group structures, together with the number of groups before and after the break, by a sequential testing K-means algorithm. It is shown that the break point, the number of groups, and the group memberships can each be estimated correctly with probability approaching one. Asymptotic distributions of the estimators of the slope coefficients are established. Monte Carlo simulations demonstrate excellent finite sample performance of the proposed estimation algorithm. An empirical application to real house price data across 377 Metropolitan Statistical Areas in the US from 1975 to 2014 suggests the presence of both structural breaks and changes in group membership.
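In stylized notation (assumed here for illustration), the model takes the form
$$ y_{it} = x_{it}'\beta_{g_i(t)} + \lambda_i' f_t + \varepsilon_{it}, \qquad g_i(t) = \begin{cases} g_i^{(1)} \in \{1,\dots,K_1\}, & t \le k_0, \\ g_i^{(2)} \in \{1,\dots,K_2\}, & t > k_0, \end{cases} $$
where $\lambda_i' f_t$ captures the interactive fixed effects, $k_0$ is the unknown break date, and the group memberships and group numbers $(K_1, K_2)$ may differ before and after the break; nuclear norm regularization provides the preliminary estimates, binary segmentation locates $k_0$, and the sequential testing K-means step recovers the group structure on each side of the break.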
A heteroskedasticity-autocorrelation robust (HAR) test statistic is proposed to test for the presence of explosive roots in financial or real asset prices when the equation errors are strongly dependent. Limit theory for the test statistic is developed and extended to heteroskedastic models. The new test has stable size properties unlike conventional test statistics that typically lead to size distortion and inconsistency in the presence of strongly dependent equation errors. The new procedure can be used to consistently time-stamp the origination and termination of an explosive episode under similar conditions of long memory errors. Simulations are conducted to assess the finite sample performance of the proposed test and estimators. An empirical application to the S&P 500 index highlights the usefulness of the proposed procedures in practical work.
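A stylized version of the testing problem (notation assumed here): under the explosive alternative the data follow a mildly explosive autoregression with strongly dependent errors,
$$ y_t = \rho_n y_{t-1} + u_t, \qquad \rho_n = 1 + \frac{c}{n^{\alpha}}, \quad c > 0,\ \alpha \in (0,1), $$
where $u_t$ may be a long memory process; the proposed right-tailed test replaces the conventional standard error with a HAR-type studentization so that size is controlled under such errors, and recursive application of the statistic time-stamps the origination and termination of the explosive episode.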
A semiparametric triangular systems approach shows how multicointegrating linkages occur naturally in a cointegrated regression model when the long run error variance matrix in the system is singular. Under such singularity, cointegrated systems embody a multicointegrated structure that makes them useful in many empirical settings. Earlier work shows that such systems may be analyzed and estimated without appealing to the associated system but with suboptimal convergence rates and potential asymptotic bias. The present paper develops a robust approach to estimation and inference for such systems using high dimensional IV methods that have appealing asymptotic properties like those known to apply in the optimal estimation of cointegrated systems (Phillips, 1991). The approach uses an extended version of high-dimensional trend IV (Phillips, 2006, 2014) estimation with deterministic orthonormal instruments. The methods and derivations involve new results on high-dimensional IV techniques and matrix normalization in the limit theory that are of independent interest. Wald tests of general linear restrictions are constructed using a fixed-b long run variance estimator that leads to robust pivotal HAR inference in both cointegrated and multicointegrated cases. Simulations show good properties of the estimation and inferential procedures in finite samples. An empirical illustration to housing stocks, starts and completions is provided.
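A brief sketch of the trend IV idea in generic notation (the specific basis, system structure, and normalizations used in the paper are not reproduced here): the nonstationary regressors $x_t$ are instrumented by $K$ deterministic orthonormal trend functions evaluated at $t/n$, for example $\varphi_k(r) = \sqrt{2}\sin(k\pi r)$, giving
$$ \hat\beta_{TIV} = \big(X' P_{\Phi} X\big)^{-1} X' P_{\Phi} y, \qquad P_{\Phi} = \Phi(\Phi'\Phi)^{-1}\Phi', \quad \Phi = \big[\varphi_k(t/n)\big]_{t \le n,\, k \le K}, $$
where $K$ may be held fixed or allowed to grow with the sample size.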
This study provides new mechanisms for identifying and estimating explosive bubbles in mixed-root panel autoregressions with a latent group structure. A postclustering approach is employed that combines k-means clustering with right-tailed panel-data testing. Uniform consistency of the k-means algorithm is established. Pivotal null limit distributions of the tests are introduced. A new method is proposed to consistently estimate the number of groups. Monte Carlo simulations show that the proposed methods perform well in finite samples; and empirical applications of the proposed methods identify bubbles in the U.S. and Chinese housing markets and the U.S. stock market.
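As a rough illustration of the post-clustering idea (a stylized sketch under simplifying assumptions, not the paper's algorithm: the unit-level statistics, the k-means inputs, and the within-group test below are placeholders), units can be clustered on autoregressive estimates and a right-tailed statistic then formed within each estimated group:

```python
import numpy as np
from sklearn.cluster import KMeans

def unit_ar1_estimates(panel):
    """OLS AR(1) slope for each unit in an (N x T) panel."""
    rho = []
    for y in panel:
        y_lag, y_cur = y[:-1], y[1:]
        rho.append(np.dot(y_lag, y_cur) / np.dot(y_lag, y_lag))
    return np.array(rho)

def postcluster_bubble_test(panel, n_groups=2):
    """Cluster units on AR(1) estimates, then form a right-tailed
    t-type statistic for rho > 1 within each estimated group.
    (Placeholder statistic; the paper's panel test is more involved.)"""
    rho = unit_ar1_estimates(panel)
    labels = KMeans(n_clusters=n_groups, n_init=10,
                    random_state=0).fit_predict(rho.reshape(-1, 1))
    stats = {}
    for g in range(n_groups):
        r = rho[labels == g]
        # standardized group mean of (rho_i - 1); large positive values suggest explosiveness
        stats[g] = np.sqrt(len(r)) * (r.mean() - 1.0) / (r.std(ddof=1) + 1e-12)
    return labels, stats

# toy example: 20 unit-root units and 10 mildly explosive units
rng = np.random.default_rng(0)
N0, N1, T = 20, 10, 200
roots = np.r_[np.ones(N0), np.full(N1, 1.02)]
panel = np.zeros((N0 + N1, T))
for i, rho_i in enumerate(roots):
    for t in range(1, T):
        panel[i, t] = rho_i * panel[i, t - 1] + rng.standard_normal()
labels, stats = postcluster_bubble_test(panel)
print(stats)
```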
This paper points out some pitfalls in the use of two-way fixed effects (TWFE) regressions when outcome variables contain nonlinear or stochastic trend components. When a policy change shifts the trend paths of outcome variables, conventional TWFE estimation can distort results and invalidate inference. A robust solution is proposed by identifying determinants of dynamic club membership based on the idea of relative convergence, which can be assessed empirically by the so-called ‘logt’ test (Phillips and Sul, 2007a). Club membership in each time period is estimated by recursive regression, transforming outcome variables to a statistically stable, stationary form. The time varying club memberships can then be used to identify their determinants by running a panel logit or ordered logit regression. This approach is applied to study COVID-19 vaccination data across 50 states and the District of Columbia (DC). A new weekly database is created to track individual state and DC vaccination policies and mandates over the period from March 2021 to February 2022. Initially, two convergent clubs are identified. Later in the sample period, the vaccination rates across states reveal a single convergent club. The primary determinant of this merger of sub-clubs is found to be federal-level vaccine mandates.
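For reference, the logt regression of Phillips and Sul (2007a) that underlies the club convergence assessment takes the form (standard notation assumed here)
$$ \log\!\Big(\frac{H_1}{H_t}\Big) - 2\log L(t) = a + b\,\log t + u_t, \qquad t = [rT], [rT]+1, \dots, T, $$
where $L(t)$ is a slowly varying function such as $\log(t+1)$, $H_t = N^{-1}\sum_i (h_{it} - 1)^2$ is the cross-sectional variance of the relative transition coefficients $h_{it} = X_{it} / (N^{-1}\sum_j X_{jt})$, and the regression discards an initial fraction $r$ of the sample; convergence corresponds to $b \ge 0$ and is assessed by a one-sided t test with HAC standard errors, with clubs formed by recursive application of the test.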