
Publications


Discussion Paper
Abstract

Industrialization experiences differ substantially across countries. We use a benchmark model of structural change to shed light on the sources of this heterogeneity and, in particular, the phenomenon of premature deindustrialization. Our analysis leads to three key findings. First, benchmark models of structural change robustly generate hump-shaped patterns for the evolution of the industrial sector. Second, heterogeneous patterns of catch-up in sectoral productivities across countries can generate variation in industrialization experiences similar to those found in the data, including premature deindustrialization. Third, differences in the rate of agricultural productivity growth across economies can account for the majority of the variation in peak industrial employment shares.

Discussion Paper
Abstract

This paper studies high-dimensional vector autoregressions (VARs) augmented with common factors that allow for strong cross-section dependence. Models of this type provide a convenient mechanism for accommodating the interconnectedness and temporal co-variability that are often present in large-dimensional systems. We propose an ℓ1-nuclear-norm regularized estimator and derive non-asymptotic upper bounds for the estimation errors as well as large sample asymptotics for the estimates. A singular value thresholding procedure is used to determine the correct number of factors with probability approaching one. Both the LASSO estimator and the conservative LASSO estimator are employed to improve estimation precision. The conservative LASSO estimates of the non-zero coefficients are shown to be asymptotically equivalent to the oracle least squares estimates. Simulations demonstrate that our estimators perform reasonably well in finite samples given the complex high-dimensional nature of the model with multiple unobserved components. In an empirical illustration we apply the methodology to explore the dynamic connectedness in the volatilities of financial asset prices and the transmission of investor fear. The findings reveal that a large proportion of connectedness is due to common factors. Conditional on the presence of these common factors, the results still document remarkable connectedness due to the interactions between the individual variables, thereby supporting a common factor augmented VAR specification.
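The singular value thresholding step can be illustrated with a toy sketch. The code below is not the paper's procedure (in particular, the theory-derived threshold is replaced by a user-supplied value), and all names and parameter choices are illustrative:

```python
import numpy as np

def estimate_num_factors(X, threshold):
    """Count singular values of the data matrix exceeding a threshold.

    Illustrative only: in the paper the thresholding is applied within a
    regularized estimation procedure and the threshold comes from theory;
    here it is simply passed in by the user.
    """
    s = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(s > threshold))

# Toy example: T x N panel driven by r = 2 common factors plus weak noise.
rng = np.random.default_rng(0)
T, N, r = 200, 50, 2
F = rng.standard_normal((T, r))          # latent factors
L = rng.standard_normal((N, r))          # factor loadings
X = F @ L.T + 0.1 * rng.standard_normal((T, N))

# The top r singular values scale with sqrt(T*N); the rest stay near the
# noise level, so a threshold between the two recovers r.
print(estimate_num_factors(X, threshold=10.0))  # → 2
```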

Discussion Paper
Abstract

Price bubbles in multiple assets are sometimes nearly coincident in occurrence. Such near-coincidence is strongly suggestive of co-movement in the associated asset prices and is likely driven by certain factors that are latent in the financial or economic system with common effects across several markets. Can we detect the presence of such common factors at the early stages of their emergence? To answer this question, we build a factor model that includes I(1), mildly explosive, and stationary factors to capture normal, exuberant, and collapsing phases in such phenomena. The I(1) factor models the primary driving force of market fundamentals. The explosive and stationary factors model latent forces that underlie the formation and destruction of asset price bubbles, which typically exist only for subperiods of the sample. The paper provides an algorithm for testing the presence of and date-stamping the origination and termination of price bubbles determined by latent factors in a large-dimensional system embodying many markets. Asymptotics of the bubble test statistic are given under the null of no common bubbles and the alternative of a common bubble across these markets. We prove consistency of a factor bubble detection process for the origination and termination dates of the common bubble. Simulations show good finite sample performance of the testing algorithm in terms of its successful detection rates. Our methods are applied to real estate markets covering 89 major cities in China over the period January 2003 to March 2013. Results suggest the presence of three common bubble episodes in what are known as China’s Tier 1 and Tier 2 cities over the sample period. There appears to be little evidence of a common bubble in Tier 3 cities.
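The three factor types correspond to three autoregressive regimes, which a toy simulation can make concrete. This is only an illustrative sketch with arbitrary parameter choices (the exponent in the mildly explosive root and the sample size are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

def simulate_ar1(rho, n, rng):
    """Simulate x_t = rho * x_{t-1} + e_t with standard normal shocks."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

# Three regimes used to model the phases of a bubble episode:
unit_root = simulate_ar1(1.0, n, rng)                      # I(1): fundamentals
mildly_explosive = simulate_ar1(1 + 1.0 / n**0.8, n, rng)  # bubble formation
stationary = simulate_ar1(0.5, n, rng)                     # collapse / normal
```

A mildly explosive root of the form 1 + c/n^α with 0 < α < 1 drifts away from unity slowly enough to mimic the gradual emergence of exuberance, yet fast enough to be statistically separable from a pure random walk.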

Discussion Paper
Abstract

Limit distribution theory in the econometric literature for functional coefficient cointegrating (FCC) regression is shown to be incorrect in important ways, influencing rates of convergence, distributional properties, and practical work. In FCC regression the cointegrating coefficient vector \beta(.) is a function of a covariate z_t. The true limit distribution of the local level kernel estimator of \beta(.) is shown to have multiple forms, each form depending on the bandwidth rate in relation to the sample size n and with an optimal convergence rate of n^{3/4}, which is achieved by letting the bandwidth have order 1/n^{1/2} when z_t is scalar. Unlike stationary regression and contrary to the existing literature on FCC regression, the correct limit theory reveals that component elements from the bias and variance terms in the kernel regression can both contribute to variability in the asymptotics depending on the bandwidth behavior in relation to the sample size. The trade-off between bias and variance that is a common feature of kernel regression consequently takes a different and more complex form in FCC regression whereby balance is achieved via the dual source of variation in the limit with an associated common convergence rate. The error in the literature arises because the random variability of the bias term has been neglected in earlier research. In stationary regression this random variability is of smaller order and can correctly be neglected in asymptotic analysis but with consequences for finite sample performance. In nonstationary regression, variability typically has larger order due to the nonstationary regressor and its omission leads to deficiencies and partial failure in the asymptotics reported in the literature. Existing results are shown to hold only in scalar covariate FCC regression and only when the bandwidth has order larger than 1/n and smaller than 1/n^{1/2}.
The correct results in cases of a multivariate covariate z_t are substantially more complex and are not covered by any existing theory. Implications of the findings for inference, confidence interval construction, bandwidth selection, and stability testing for the functional coefficient are discussed. A novel self-normalized t-ratio statistic is developed which is robust with respect to bandwidth order and persistence in the regressor, enabling improved testing and confidence interval construction. Simulations show the superior performance of this robust statistic, corroborating the finite sample relevance of the new limit theory in both stationary and nonstationary regressions.
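The local level kernel estimator of \beta(z) has a standard weighted-least-squares form, \hat\beta(z) = [\sum_t K_h(z_t - z) x_t x_t']^{-1} \sum_t K_h(z_t - z) x_t y_t. A minimal sketch, assuming a Gaussian kernel and a scalar covariate (the function names and the toy data-generating process are illustrative, not the paper's):

```python
import numpy as np

def local_level_beta(z0, y, x, z, h):
    """Local level kernel estimate of beta(z0) in y_t = beta(z_t)'x_t + u_t,
    using a Gaussian kernel with bandwidth h. A textbook sketch, not the
    paper's exact implementation."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)   # kernel weights K_h(z_t - z0)
    Xw = x * w[:, None]
    A = Xw.T @ x                             # sum_t w_t x_t x_t'
    b = Xw.T @ y                             # sum_t w_t x_t y_t
    return np.linalg.solve(A, b)

# Toy data with scalar covariate z_t and beta(z) = (1 + z, 2 - z).
rng = np.random.default_rng(2)
n = 2000
z = rng.uniform(-1, 1, n)
x = rng.standard_normal((n, 2))
beta = np.column_stack([1 + z, 2 - z])
y = np.sum(beta * x, axis=1) + 0.1 * rng.standard_normal(n)

# Bandwidth of order n^{-1/2}, the rate highlighted in the abstract.
h = n ** -0.5
print(local_level_beta(0.0, y, x, z, h))  # roughly (1, 2)
```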

Discussion Paper
Abstract

This paper examines methods of inference concerning quantile treatment effects (QTEs) in randomized experiments with matched-pairs designs (MPDs). We derive the limit distribution of the QTE estimator under MPDs, highlighting the difficulties that arise in analytical inference due to the need for tuning parameters. We show that the naïve weighted bootstrap fails to approximate the limit distribution of the QTE estimator under MPDs because it ignores the dependence structure within the matched pairs. To address this difficulty we propose two bootstrap methods that can consistently approximate the limit distribution: the gradient bootstrap and the weighted bootstrap of the inverse propensity score weighted (IPW) estimator. The gradient bootstrap is free of tuning parameters but requires knowledge of the pair identities. The weighted bootstrap of the IPW estimator does not require such knowledge but involves one tuning parameter. Both methods are straightforward to implement and able to provide pointwise confidence intervals and uniform confidence bands that achieve exact limiting coverage rates. We demonstrate their finite sample performance using simulations and provide an empirical application to a well-known dataset in microfinance.

Discussion Paper
Abstract

Housing fever is a popular term to describe an overheated housing market or housing price bubble. Like other financial asset bubbles, housing fever can inflict harm on the real economy, as indeed the US housing bubble did in the period following 2006 leading up to the global financial crisis and Great Recession. One contribution that econometricians can make to minimize the harm created by a housing bubble is to provide a quantitative ‘thermometer’ for diagnosing ongoing housing fever. Early diagnosis can enable prompt and effective policy action that reduces long-term damage to the real economy. This paper provides a selective review of the relevant literature on econometric methods for identifying housing bubbles together with some new methods of research and an empirical application. We first present a technical definition of a housing bubble that facilitates empirical work and discuss significant difficulties encountered in practical work and the solutions that have been proposed in the past literature. A major challenge in all econometric identification procedures is to assess prices in relation to fundamentals, which requires measurement of fundamentals. One solution to address this challenge is to estimate the fundamental component from an underlying structural relationship involving measurable variables. A second aim of the paper is to improve the estimation accuracy of fundamentals by means of an easy-to-implement reduced-form approach. Since many of the relevant variables that determine fundamentals are nonstationary and interdependent we use the IVX (Phillips and Magdalinos, 2009) method to estimate the reduced-form model to reduce the finite sample bias which arises from highly persistent regressors and endogeneity. The recursive evolving test of Phillips, Shi and Yu (2015) is applied to the estimated non-fundamental component for the identification of speculative bubbles. The new bubble test developed here is referred to as PSY-IVX.
An empirical application to the eight Australian capital city housing markets over the period 1999 to 2017 shows that bubble testing results are sensitive to different ways of controlling for fundamentals and highlights the importance of accurate estimation of these housing market fundamentals.
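The recursive evolving idea behind such bubble tests can be sketched as a supremum of ADF t-statistics over expanding windows. The code below is a deliberately simplified illustration (no lag augmentation, no critical values, applied to raw simulated series rather than an estimated non-fundamental component), and all names are illustrative:

```python
import numpy as np

def adf_tstat(y):
    """t-statistic on rho in the regression dy_t = a + rho * y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    coef, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return coef[1] / np.sqrt(cov[1, 1])

def sup_adf(y, min_window):
    """Sup of forward recursive ADF statistics; a bubble is flagged when
    the sup exceeds a right-tail critical value (omitted here)."""
    return max(adf_tstat(y[:k]) for k in range(min_window, len(y) + 1))

rng = np.random.default_rng(3)
n = 200
rw = np.cumsum(rng.standard_normal(n))       # pure random walk: no bubble
bubble = rw.copy()
for t in range(100, n):                      # explosive sub-period
    bubble[t] = 1.05 * bubble[t - 1] + rng.standard_normal()

# The explosive series produces a much larger sup statistic.
print(sup_adf(rw, 40), sup_adf(bubble, 40))
```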

Discussion Paper
Abstract

The Paycheck Protection Program (PPP) extended $669 billion of forgivable loans in an unprecedented effort to support small businesses affected by the COVID-19 crisis. This paper provides evidence that information frictions and the “first-come, first-served” design of the PPP skewed its resources towards larger firms and may have permanently reduced its effectiveness. Using new daily survey data on small businesses in the U.S., we show that the smallest businesses were less aware of the PPP and less likely to apply. If they did apply, the smallest businesses applied later, faced longer processing times, and were less likely to have their application approved. These frictions may have mattered, as businesses that received aid report fewer layoffs, higher employment, and improved expectations about the future.

Discussion Paper
Abstract

We develop a model of consumer search with spatial learning in which sampling the payoff of one product causes consumers to update their beliefs about the payoffs of other products that are nearby in attribute space. Spatial learning gives rise to path dependence, as each new search decision depends on past experiences through the updating process. We present evidence of spatial learning in data that records online search for digital cameras. Consumers’ search paths tend to converge to the chosen product in attribute space, and consumers take larger steps away from rarely purchased products. We estimate the structural parameters of the model and show that these patterns can be rationalized by our model, but not by a model without spatial learning. Eliminating spatial learning reduces consumer welfare by 12%: cross-product inferences allow consumers to locate better products in a shorter time. Spatial learning has important implications for the power of search intermediaries. We use simulations to show that consumer-optimal product recommendations are those that are most informative about other products.
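The spatial-updating mechanism can be sketched as a conditional Gaussian belief update in which prior covariance between products decays with attribute distance. Everything below is an illustrative assumption (the noiseless observation, the exponential covariance, and the attribute positions are not from the paper):

```python
import numpy as np

# Hypothetical setup: prior beliefs over 3 products whose payoffs are
# correlated according to distance in a one-dimensional attribute space.
attr = np.array([0.0, 0.2, 2.0])          # product positions in attribute space
dist = np.abs(attr[:, None] - attr[None, :])
Sigma = np.exp(-dist)                     # closer products => higher covariance
mu = np.zeros(3)                          # prior mean payoffs

# Consumer searches product 0 and observes its payoff exactly.
i, obs = 0, 1.0

# Conditional Gaussian update: beliefs about every product shift in
# proportion to its covariance with the sampled product.
gain = Sigma[:, i] / Sigma[i, i]
mu_post = mu + gain * (obs - mu[i])
Sigma_post = Sigma - np.outer(Sigma[:, i], Sigma[i, :]) / Sigma[i, i]

print(mu_post)  # nearby product 1 updates strongly; distant product 2 barely
```

This is the sense in which one sample is informative about neighbors: the posterior mean of product 1 (distance 0.2) moves almost as much as the sampled product's, while product 2 (distance 2.0) moves only slightly.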

Discussion Paper
Abstract

This paper is focused not on the Internet architecture – as defined by layering, the narrow waist of IP, and other core design principles – but on the Internet infrastructure, as embodied in the technologies and organizations that provide Internet service. In this paper we discuss both the challenges and the opportunities that make this an auspicious time to revisit how we might best structure the Internet’s infrastructure. Currently, the tasks of transit-between-domains and last-mile-delivery are jointly handled by a set of ISPs that interconnect through BGP. In this paper we propose cleanly separating these two tasks. For transit, we propose the creation of a “public option” for the Internet’s core backbone. This public option core, which complements rather than replaces the backbones used by large-scale ISPs, would (i) run an open market for backbone bandwidth so it could leverage links offered by third parties, and (ii) structure its terms-of-service to enforce network neutrality so as to encourage competition and reduce the advantage of large incumbents.

Canadian Journal of Economics
Abstract

Using microdata of firm exports and international patent activity, we find that Greek innovative exporters, identified by their patent filing activity, have substantially higher export revenues by selling higher quantities rather than charging higher prices. To account for this evidence, we set up a horizontally differentiated product model in which an innovative exporter competes for market share in a destination against many non-innovative rivals. We argue that as the competition among the exporters of the non-innovative product becomes more intense, the innovative firm exports more compared with its non-innovative rivals in more distant markets, a prediction that is empirically confirmed in the dataset for Greek innovative exporters.