A heteroskedasticity-autocorrelation robust (HAR) test statistic is proposed to test for the presence of explosive roots in financial or real asset prices when the equation errors are strongly dependent. Limit theory for the test statistic is developed and extended to heteroskedastic models. The new test has stable size properties unlike conventional test statistics that typically lead to size distortion and inconsistency in the presence of strongly dependent equation errors. The new procedure can be used to consistently time-stamp the origination and termination of an explosive episode under similar conditions of long memory errors. Simulations are conducted to assess the finite sample performance of the proposed test and estimators. An empirical application to the S&P 500 index highlights the usefulness of the proposed procedures in practical work.
This paper explores weak identiﬁcation issues arising in commonly used models of economic and ﬁnancial time series. Two highly popular conﬁgurations are shown to be asymptotically observationally equivalent: one with long memory and weak autoregressive dynamics, the other with antipersistent shocks and a near-unit autoregressive root. We develop a data-driven semiparametric and identiﬁcation-robust approach to inference that reveals such ambiguities and documents the prevalence of weak identiﬁcation in many realized volatility and trading volume series. The identiﬁcation-robust empirical evidence generally favors long memory dynamics in volatility and volume, a conclusion that is corroborated using social-media news flow data.
This study provides new mechanisms for identifying and estimating explosive bubbles in mixed-root panel autoregressions with a latent group structure. A post-clustering approach is employed that combines a recursive k-means clustering algorithm with panel-data test statistics for testing the presence of explosive roots in time series trajectories. Uniform consistency of the k-means clustering algorithm is established, showing that the post-clustering estimate is asymptotically equivalent to the oracle counterpart that uses the true group identities. Based on the estimated group membership, right-tailed self-normalized t-tests and coeﬀicient-based J-tests, each with pivotal limit distributions, are introduced to detect the explosive roots. The usual Information Criterion (IC) for selecting the correct number of groups is found to be inconsistent and a new method that combines IC with a Hausman-type speciﬁcation test is proposed that consistently estimates the true number of groups. Extensive Monte Carlo simulations provide strong evidence that in ﬁnite samples, the recursive k-means clustering algorithm can correctly recover latent group membership in data of this type and the proposed post-clustering panel-data tests lead to substantial power gains compared with the time series approach. The proposed methods are used to identify bubble behavior in US and Chinese housing markets, and the US stock market, leading to new ﬁndings concerning speculative behavior in these markets.
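The post-clustering idea can be conveyed with a stylized sketch: fit a unit-specific AR(1) slope to each trajectory, then split units into two latent groups with a simple one-dimensional k-means step. This is an illustrative simplification on simulated data, not the paper's recursive k-means algorithm or its panel test statistics; all names and parameter values are hypothetical.

```python
import numpy as np

# A stylized version of the post-clustering idea: fit an AR(1) slope to each series,
# then split units into two latent groups (unit-root vs explosive) with 1-D k-means.
# Simplified sketch only -- not the paper's recursive k-means algorithm.
rng = np.random.default_rng(4)
N, T = 40, 300
rhos = np.r_[np.ones(20), np.full(20, 1.03)]   # group 1: unit root, group 2: explosive
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rhos * y[:, t - 1] + rng.standard_normal(N)

# Unit-by-unit OLS AR(1) slope estimates
rho_hat = (y[:, :-1] * y[:, 1:]).sum(axis=1) / (y[:, :-1] ** 2).sum(axis=1)

def kmeans1d(x, iters=100):
    """Two-cluster k-means on a 1-D array; returns boolean membership of the upper cluster."""
    c = np.array([x.min(), x.max()])           # initialize centers at the extremes
    for _ in range(iters):
        upper = np.abs(x - c[1]) < np.abs(x - c[0])
        c = np.array([x[~upper].mean(), x[upper].mean()])
    return upper

upper = kmeans1d(rho_hat)
print(upper.sum())   # number of units the sketch places in the explosive group
```

With a clearly explosive group (here ρ = 1.03 over 300 periods) the fitted slopes separate sharply, so even this crude clustering recovers the latent membership; the paper's recursive algorithm handles the much harder mixed-root cases.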
This paper studies a continuous time dynamic system with a random persistence parameter. The exact discrete time representation is obtained and related to several discrete time random coeﬀicient models currently in the literature. The model distinguishes various forms of unstable and explosive behaviour according to speciﬁc regions of the parameter space that open up the potential for testing these forms of extreme behaviour. A two-stage approach that employs realized volatility is proposed for the continuous system estimation, asymptotic theory is developed, and test statistics to identify the diﬀerent forms of extreme sample path behaviour are proposed. Simulations show that the proposed estimators work well in empirically realistic settings and that the tests have good size and power properties in discriminating characteristics in the data that diﬀer from typical unit root behaviour. The theory is extended to cover models where the random persistence parameter is endogenously determined. An empirical application based on daily real S&P 500 index data over 1964-2015 reveals strong evidence against parameter constancy after early 1980, which strengthens after July 1997, leading to a long duration of what the model characterizes as extreme behaviour in real stock prices.
This paper develops a new hedonic method for constructing a real estate price index that utilizes all transaction price information, encompassing both single-sale and repeat-sale properties. The new method is less prone to specification errors than standard hedonic methods and uses all available data. Like the Case-Shiller repeat-sales method, the new method has the advantage of being computationally efficient. In an empirical analysis of the methodology, we fit the model to all transaction prices for private residential property holdings in Singapore between Q1 1995 and Q2 2014, covering several periods of major price fluctuation and changes in government macroprudential policy. Two new indices are created, one from all transaction prices and one from single-sale prices. The indices are compared with the S&P/Case-Shiller index. The results show that the new indices slightly outperform the S&P/Case-Shiller index in predicting the prices of single-sale homes out-of-sample, but underperform it in predicting the prices of repeat-sale homes out-of-sample. The empirical findings indicate that specification bias can be more substantial than sample selection bias when constructing a real estate price index. In a further empirical application, the recursive method of Phillips, Shi and Yu (2014) is used to detect explosive periods in Singapore real estate prices. The results confirm the existence of an explosive period from Q4 2006 to Q1 2008. No explosive period is found after 2009, suggesting that the ten successive rounds of cooling measures implemented by the Singapore government have been effective in changing price dynamics and preventing a subsequent outbreak of explosive behavior in the Singapore real estate market.
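The standard time-dummy hedonic regression that this methodology builds on can be sketched as follows: regress log price on property characteristics plus quarter dummies, and read the index off the exponentiated time effects. This is a baseline illustration on simulated data, not the paper's estimator, which pools single-sale and repeat-sale information; all variable names and numbers are hypothetical.

```python
import numpy as np

# A stylized time-dummy hedonic regression: log price on one characteristic plus
# quarter dummies; the index is the exponentiated quarter effect, base quarter = 1.
rng = np.random.default_rng(5)
n_q, n_per = 8, 200                        # 8 quarters, 200 sales per quarter
quarter = np.repeat(np.arange(n_q), n_per)
size = rng.uniform(50, 150, n_q * n_per)   # floor area, an illustrative characteristic
true_index = np.linspace(0.0, 0.3, n_q)    # cumulative log price growth by quarter
logp = 8.0 + 0.01 * size + true_index[quarter] + 0.1 * rng.standard_normal(n_q * n_per)

# Design matrix: intercept, size, and quarter dummies for q = 1..7 (q = 0 is the base)
D = (quarter[:, None] == np.arange(1, n_q)).astype(float)
X = np.column_stack([np.ones_like(size), size, D])
beta = np.linalg.lstsq(X, logp, rcond=None)[0]
index = np.exp(np.r_[0.0, beta[2:]])       # price index normalized to 1 in quarter 0
print(index)
```

Because the quarter effects are estimated jointly with the characteristics, the index controls for compositional change in the sold stock, which is the feature the paper's richer specification preserves while reducing specification error.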
This paper provides the limit theory of real time dating algorithms for bubble detection that were suggested in Phillips, Wu and Yu (2011, PWY) and Phillips, Shi and Yu (2013b, PSY). Bubbles are modeled using mildly explosive bubble episodes that are embedded within longer periods where the data evolves as a stochastic trend, thereby capturing normal market behavior as well as exuberance and collapse. Both the PWY and PSY estimates rely on recursive right tailed unit root tests (each with a diﬀerent recursive algorithm) that may be used in real time to locate the origination and collapse dates of bubbles. Under certain explicit conditions, the moving window detector of PSY is shown to be a consistent dating algorithm even in the presence of multiple bubbles. The other algorithms are consistent detectors for bubbles early in the sample and, under stronger conditions, for subsequent bubbles in some cases. These asymptotic results and accompanying simulations guide the practical implementation of the procedures. They indicate that the PSY moving window detector is more reliable than the PWY strategy, sequential application of the PWY procedure and the CUSUM procedure.
Right-tailed unit root tests have proved promising for detecting exuberance in economic and ﬁnancial activities. Like left-tailed tests, the limit theory and test performance are sensitive to the null hypothesis and the model speciﬁcation used in parameter estimation. This paper aims to provide some empirical guidelines for the practical implementation of right-tailed unit root tests, focussing on the sup ADF test of Phillips, Wu and Yu (2011), which implements a right-tailed ADF test repeatedly on a sequence of forward sample recursions. We analyze and compare the limit theory of the sup ADF test under diﬀerent hypotheses and model speciﬁcations. The size and power properties of the test under various scenarios are examined in simulations and some recommendations for empirical practice are given. Empirical applications to the Nasdaq and to Australian and New Zealand housing data illustrate these speciﬁcation issues and reveal their practical importance in testing.
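The forward recursive sup ADF idea can be sketched in a few lines: compute a right-tailed ADF t-statistic on expanding subsamples y_1, ..., y_k for k >= r0*T and take the supremum. The sketch below uses a zero-lag ADF regression and illustrative parameter choices; it is not the authors' implementation, and the critical values needed for actual testing come from the limit theory discussed above.

```python
import numpy as np

def adf_tstat(y):
    """t-statistic on rho in the regression dy_t = a + rho * y_{t-1} + e_t (no lag terms)."""
    dy = np.diff(y)
    ylag = y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def sup_adf(y, r0=0.1):
    """Sup of ADF t-stats over forward expanding samples y[:k], k >= r0 * T."""
    T = len(y)
    k0 = max(int(np.floor(r0 * T)), 10)
    return max(adf_tstat(y[:k]) for k in range(k0, T + 1))

rng = np.random.default_rng(0)
T = 200
rw = np.cumsum(rng.standard_normal(T))       # pure random walk (null behavior)
bubble = rw.copy()
for t in range(120, T):                      # mildly explosive episode in the tail
    bubble[t] = 1.04 * bubble[t - 1] + rng.standard_normal()
print(sup_adf(rw), sup_adf(bubble))          # the bubble series yields the larger statistic
```

The expanding-window supremum is what gives the test power against an explosive episode that begins partway through the sample, since at least one recursion ends inside the episode.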
Identifying and dating explosive bubbles when there is periodically collapsing behavior over time has been a major concern in the economics literature and is of great importance for practitioners. The complexity of the nonlinear structure inherent in multiple bubble phenomena within the same sample period makes econometric analysis particularly difficult. The present paper develops new recursive procedures for practical implementation and surveillance strategies that may be employed by central banks and fiscal regulators. We show how the testing procedure and dating algorithm of Phillips, Wu and Yu (2011, PWY) are affected by multiple bubbles and may fail to be consistent. To address this difficulty, the paper proposes a generalized version of the sup ADF test of PWY, derives its asymptotic distribution, introduces a new date-stamping strategy for the origination and termination of multiple bubbles, and proves the consistency of this dating procedure. Simulations show that the test significantly improves discriminatory power and leads to distinct power gains when multiple bubbles occur. Empirical applications are conducted to S&P 500 stock market data over a long historical period from January 1871 to December 2010. The new approach identifies many key historical episodes of exuberance and collapse over this period, whereas the strategy of PWY and the CUSUM procedure locate far fewer episodes in the same sample range.
Multivariate continuous time models are now widely used in economics and finance. Empirical applications typically rely on some process of discretization so that the system may be estimated with discrete data. This paper introduces a framework for discretizing linear multivariate continuous time systems that includes the commonly used Euler and trapezoidal approximations as special cases and leads to a general class of estimators for the mean reversion matrix. Asymptotic distributions and bias formulae are obtained for estimates of the mean reversion parameter. Explicit expressions are given for the discretization bias and its relationship to estimation bias in both multivariate and univariate settings. In the univariate context, we compare the performance of the two approximation methods relative to exact maximum likelihood (ML) in terms of bias and variance for the Vasicek process. The bias and variance of the Euler method are found to be smaller than those of the trapezoidal method, which are in turn smaller than those of exact ML. Simulations suggest that when the mean reversion is slow the approximation methods work better than exact ML and that the bias formulae are accurate. For the square root process, the Euler method outperforms the Nowman method in terms of both bias and variance, and simulation evidence indicates that it also has smaller bias and variance than exact ML and the Milstein method.
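For the univariate Vasicek case, the relationship between the Euler approximation and the exact discrete model can be illustrated numerically: both map the fitted AR(1) slope b into a mean-reversion estimate, via 1 - b ≈ κΔ for Euler and b = e^{-κΔ} exactly. The parameter values below are illustrative, and this sketch covers only the univariate case, not the paper's multivariate framework.

```python
import numpy as np

# Simulate an exact Vasicek path dX = kappa*(mu - X)dt + sigma dW, then recover
# kappa two ways: the exact-discretization mapping and the Euler approximation.
kappa, mu, sigma, delta, n = 0.5, 0.06, 0.02, 1 / 12, 6000
rng = np.random.default_rng(1)
phi = np.exp(-kappa * delta)                        # exact AR(1) coefficient
sd = sigma * np.sqrt((1 - phi**2) / (2 * kappa))    # exact transition std deviation
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + phi * (x[t - 1] - mu) + sd * rng.standard_normal()

# AR(1) regression x_t = a + b * x_{t-1} + e_t
X = np.column_stack([np.ones(n - 1), x[:-1]])
a, b = np.linalg.lstsq(X, x[1:], rcond=None)[0]

kappa_exact = -np.log(b) / delta     # exact mapping b = exp(-kappa * delta)
kappa_euler = (1 - b) / delta        # Euler approximation exp(-kd) ~ 1 - kd
print(kappa_exact, kappa_euler)
```

Since 1 - b < -log(b) for b in (0, 1), the Euler estimate is always below the exact-mapping estimate, which is one concrete face of the discretization bias the paper characterizes analytically.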
A new recursive regression methodology is introduced to analyze the bubble characteristics of various financial time series during the subprime crisis. The methods modify a technique proposed in Phillips, Wu and Yu (2010) and provide a technology for identifying bubble behavior and consistently dating its origination and collapse. The tests also serve as an early warning diagnostic of bubble activity. Seven relevant financial series are investigated, including three financial assets (the Nasdaq index, home price index and asset-backed commercial paper), two commodities (the crude oil price and platinum price), one bond rate (Baa), and one exchange rate (Pound/USD). Statistically significant bubble characteristics are found in all of these series. The empirical estimates of the origination and collapse dates suggest an interesting migration mechanism among the financial variables: a bubble first emerged in the equity market in mid-1995, lasting to the end of 2000, followed by a bubble in the real estate market between January 2001 and July 2007 and in the mortgage market between November 2005 and August 2007. After the subprime crisis erupted, the phenomenon migrated selectively into the commodity market and the foreign exchange market, creating bubbles which subsequently burst at the end of 2008, just as the effects on the real economy and economic growth became manifest. Our empirical estimates of the origination and collapse dates match well with the general dating of the crisis put forward in a recent study by Caballero, Farhi and Gourinchas (2008).
A recursive test procedure is suggested that provides a mechanism for testing explosive behavior, date-stamping the origination and collapse of economic exuberance, and providing valid confidence intervals for explosive growth rates. The method involves the recursive implementation of a right-side unit root test and a sup test, both of which are easy to use in practical applications, together with some new limit theory for mildly explosive processes. The test procedure is shown to have discriminatory power in detecting periodically collapsing bubbles, thereby overcoming a weakness in earlier applications of unit root tests for economic bubbles. An empirical application to the Nasdaq stock price index in the 1990s provides confirmation of explosiveness and date-stamps the origination of financial exuberance to mid-1995, prior to the famous remark in December 1996 by Alan Greenspan about irrational exuberance in financial markets, thereby giving the remark empirical content.
A model of price determination is proposed that incorporates flat trading features into an efficient price process. The model involves the superposition of a Brownian semimartingale process for the efficient price and a Bernoulli process that determines the extent of flat price trading. A limit theory for the conventional realized volatility (RV) measure of integrated volatility is developed. The results show that RV is still consistent but has an inflated asymptotic variance that depends on the probability of flat trading. Estimated quarticity is similarly affected, so that both the feasible central limit theorem and the inferential framework suggested in Barndorff-Nielsen and Shephard (2002) remain valid under flat price trading.
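A minimal simulation conveys the consistency result: compute RV on a discretized Brownian price and on the same path with Bernoulli flat trading, where with probability p the quote simply repeats. The values of n, sigma, and p below are illustrative, and the discretized setup is a sketch of the continuous-time model, not its formal construction.

```python
import numpy as np

# Realized volatility of an "efficient" Brownian price versus the same path with
# Bernoulli flat trading: with probability p each period, the quote simply repeats.
rng = np.random.default_rng(2)
n, sigma, p = 23400, 1.0, 0.3                 # one "day" of second-level increments
eff = np.cumsum(sigma * rng.standard_normal(n) / np.sqrt(n))  # integrated vol = sigma^2

flat = rng.random(n) < p                      # True -> no trade, price unchanged
obs = eff.copy()
for t in range(1, n):
    if flat[t]:
        obs[t] = obs[t - 1]

rv_eff = np.sum(np.diff(eff) ** 2)            # RV = sum of squared returns
rv_obs = np.sum(np.diff(obs) ** 2)
print(rv_eff, rv_obs)                         # both are close to sigma^2 = 1
```

Flat periods produce zero returns followed by one aggregated return, so RV still sums squared increments of the same Brownian path over a coarser random partition: the estimator remains centered on integrated volatility but its sampling variance is inflated, matching the limit theory described above.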
A new methodology is proposed to estimate theoretical prices of ﬁnancial contingent-claims whose values are dependent on some other underlying ﬁnancial assets. In the literature the preferred choice of estimator is usually maximum likelihood (ML). ML has strong asymptotic justiﬁcation but is not necessarily the best method in ﬁnite samples. The present paper proposes instead a simulation-based method that improves the ﬁnite sample performance of the ML estimator while maintaining its good asymptotic properties. The methods are implemented and evaluated here in the Black-Scholes option pricing model and in the Vasicek bond pricing model, but have wider applicability. Monte Carlo studies show that the proposed procedures achieve bias reductions over ML estimation in pricing contingent claims. The bias reductions are sometimes accompanied by reductions in variance, leading to signiﬁcant overall gains in mean squared estimation error. Empirical applications to US treasury bills highlight the diﬀerences between the bond prices implied by the simulation-based approach and those delivered by ML. Some consequences for the statistical testing of contingent-claim pricing models are discussed.
This paper overviews maximum likelihood and Gaussian methods of estimating continuous time models used in ﬁnance. Since the exact likelihood can be constructed only in special cases, much attention has been devoted to the development of methods designed to approximate the likelihood. These approaches range from crude Euler-type approximations and higher order stochastic Taylor series expansions to more complex polynomial-based expansions and inﬁll approximations to the likelihood based on a continuous time data record. The methods are discussed, their properties are outlined and their relative ﬁnite sample performance compared in a simulation experiment with the nonlinear CIR diﬀusion model, which is popular in empirical ﬁnance. Bias correction methods are also considered and particular attention is given to jackknife and indirect inference estimators. The latter retains the good asymptotic properties of ML estimation while removing ﬁnite sample bias. This method demonstrates superior performance in ﬁnite samples.
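The Euler-type approximation to the likelihood mentioned above can be sketched for the CIR model: each transition density is replaced by a Gaussian with the Euler mean and variance. The sketch below profiles only the mean-reversion parameter over a grid, holding the other parameters at their true values; all numbers are illustrative and this is not a substitute for the refined expansions the survey discusses.

```python
import numpy as np

# Euler pseudo-likelihood for the CIR diffusion dX = kappa*(mu - X)dt + sigma*sqrt(X) dW:
# the transition X_{t+1} | X_t is approximated by
# N( x + kappa*(mu - x)*delta,  sigma^2 * x * delta ).
rng = np.random.default_rng(6)
kappa, mu, sigma, delta, n = 0.8, 0.1, 0.15, 1 / 12, 4000
x = np.empty(n)
x[0] = mu
for t in range(1, n):                          # Euler simulation, truncated at zero
    x[t] = max(x[t - 1] + kappa * (mu - x[t - 1]) * delta
               + sigma * np.sqrt(x[t - 1] * delta) * rng.standard_normal(), 1e-8)

def euler_loglik(k):
    """Gaussian Euler log-likelihood in kappa, with mu and sigma held fixed."""
    m = x[:-1] + k * (mu - x[:-1]) * delta     # Euler conditional mean
    v = sigma**2 * x[:-1] * delta              # Euler conditional variance
    return -0.5 * np.sum(np.log(2 * np.pi * v) + (x[1:] - m) ** 2 / v)

grid = np.linspace(0.1, 2.0, 191)
k_hat = grid[np.argmax([euler_loglik(k) for k in grid])]
print(k_hat)    # pseudo-ML estimate of the mean-reversion parameter
```

Even this crude approximation recovers the drift parameter reasonably well at a monthly sampling frequency, though the survey's point stands: mean-reversion estimates carry substantial finite sample bias, which motivates the jackknife and indirect inference corrections.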
It is well-known that maximum likelihood (ML) estimation of the autoregressive parameter of a dynamic panel data model with ﬁxed eﬀects is inconsistent under ﬁxed time series sample size (T) and large cross section sample size (N) asymptotics. The estimation bias is particularly relevant in practical applications when T is small and the autoregressive parameter is close to unity. The present paper proposes a general, computationally inexpensive method of bias reduction that is based on indirect inference (Gouriéroux et al., 1993), shows unbiasedness and analyzes eﬀiciency. The method is implemented in a simple linear dynamic panel model, but has wider applicability and can, for instance, be easily extended to more complicated frameworks such as nonlinear models. Monte Carlo studies show that the proposed procedure achieves substantial bias reductions with only mild increases in variance, thereby substantially reducing root mean square errors. The method is compared with certain consistent estimators and bias-corrected ML estimators previously proposed in the literature and is shown to have superior ﬁnite sample properties to GMM and the bias-corrected ML of Hahn and Kuersteiner (2002). Finite sample performance is compared with that of a recent estimator proposed by Han and Phillips (2005).
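The indirect inference recipe can be sketched directly: compute the biased within (fixed-effects) estimator on the data, simulate the binding function mapping the true autoregressive parameter to the expected within estimate under the same design, and invert it. The coarse grid-and-interpolation inversion below keeps the sketch short; the design choices and names are illustrative, not the paper's exact procedure.

```python
import numpy as np

# Indirect-inference bias correction for the within estimator of a dynamic panel AR(1):
# simulate the binding function rho -> E[rho_hat_FE] and invert it at the observed value.
rng = np.random.default_rng(3)
N, T = 200, 6    # large N, small T: the setting where the Nickell bias bites

def within_ar1(y):
    """Within-group OLS slope of y_it on y_{i,t-1} after removing individual means."""
    ylag, ycur = y[:, :-1], y[:, 1:]
    ylag = ylag - ylag.mean(axis=1, keepdims=True)
    ycur = ycur - ycur.mean(axis=1, keepdims=True)
    return (ylag * ycur).sum() / (ylag * ylag).sum()

def simulate(rho, alpha, reps=1):
    """Average within estimate over `reps` panels generated with fixed effects alpha."""
    est = []
    for _ in range(reps):
        y = np.empty((N, T + 1))
        y[:, 0] = alpha + rng.standard_normal(N)
        for t in range(1, T + 1):
            y[:, t] = alpha + rho * y[:, t - 1] + rng.standard_normal(N)
        est.append(within_ar1(y))
    return float(np.mean(est))

# "Observed" data generated at rho0 = 0.8: the within estimator is biased downward.
alpha = rng.standard_normal(N)
rho0 = 0.8
rho_fe = simulate(rho0, alpha)

# Binding function on a grid, then inversion by interpolation (coarse, illustrative).
grid = np.linspace(0.0, 0.95, 10)
binding = [simulate(r, alpha, reps=50) for r in grid]
rho_ii = float(np.interp(rho_fe, binding, grid))
print(rho_fe, rho_ii)   # rho_ii should sit much closer to 0.8 than rho_fe does
```

Because the binding function is simulated under the same fixed-T design that generates the bias, inverting it removes the bias automatically, without deriving an analytical bias formula; this is the computational inexpensiveness the abstract refers to.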