
Publications


Discussion Paper
Abstract

The Hodrick-Prescott (HP) filter is one of the most widely used econometric methods in applied macroeconomic research. The technique is nonparametric and seeks to decompose a time series into a trend and a cyclical component unaided by economic theory or prior trend specification. Like all nonparametric methods, the HP filter depends critically on a tuning parameter that controls the degree of smoothing. Yet in contrast to modern nonparametric methods and applied work with these procedures, empirical practice with the HP filter almost universally relies on standard settings for the tuning parameter that have been suggested largely by experimentation with macroeconomic data and heuristic reasoning about the form of economic cycles and trends. As recent research has shown, standard settings may not be adequate in removing trends, particularly stochastic trends, in economic data. This paper proposes an easy-to-implement practical procedure of iterating the HP smoother that is intended to make the filter a smarter smoothing device for trend estimation and trend elimination. We call this iterated HP technique the boosted HP filter in view of its connection to L_2-boosting in machine learning. The paper develops limit theory to show that the boosted HP filter asymptotically recovers trend mechanisms that involve unit root processes, deterministic polynomial drifts, and polynomial drifts with structural breaks – the most common trends that appear in macroeconomic data and current modeling methodology. In doing so, the boosted filter provides a new mechanism for consistently estimating multiple structural breaks. A stopping criterion is used to automate the iterative HP algorithm, making it a data-determined method that is ready for modern data-rich environments in economic research. 
The methodology is illustrated using three real data examples that highlight the differences between simple HP filtering, the data-determined boosted filter, and an alternative autoregressive approach. These examples show that the boosted HP filter is helpful in analyzing a large collection of heterogeneous macroeconomic time series that manifest various degrees of persistence, trend behavior, and volatility. 
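The iteration the abstract describes is easy to sketch: each HP pass fits a smooth trend to the current residual, and the boosted trend accumulates those fits. The sketch below is illustrative, not the paper's implementation; it uses a dense linear solve for clarity and a crude tolerance-based stopping rule in place of the paper's data-determined criterion, with the conventional quarterly setting λ = 1600 as the default.

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """One pass of the HP smoother: solves (I + lam * D'D) f = y,
    where D is the second-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)  # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

def boosted_hp(y, lam=1600.0, max_iter=100, tol=1e-4):
    """L2-boosting: repeatedly apply the HP smoother to the current residual
    and accumulate the fitted pieces into the trend estimate."""
    trend = np.zeros(len(y))
    resid = y.astype(float)
    for _ in range(max_iter):
        step = hp_trend(resid, lam)
        trend += step
        resid -= step
        # Placeholder stopping rule; the paper automates this with a
        # data-determined criterion instead.
        if np.max(np.abs(step)) < tol * (1 + np.max(np.abs(y))):
            break
    return trend, resid
```

Because the second-difference operator annihilates linear functions, a single pass already reproduces a purely linear trend exactly; the boosting iterations matter for the stochastic and breaking trends the paper targets.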

Discussion Paper
Abstract

We propose a criterion for determining whether a local policy analysis can be made in a given equilibrium in an overlapping generations model. The criterion can be applied to models with an infinite past and future as well as to those with a truncated past. The equilibrium is not necessarily a steady state; for example, the demographic and type composition of the population or individuals’ endowments can change over time. However, asymptotically, the equilibrium should be stationary. The two limiting stationary paths at either end of the timeline do not have to be the same. If they are, the conditions for local uniqueness are far more stringent for an economy with a truncated past than for its counterpart with an infinite past.

In addition, we illustrate our main result using a textbook model with a single physical good, a two-period life cycle, and a single type of consumer. In this model we show how to calculate the response to a policy change using the implicit function theorem.
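The implicit-function-theorem calculation mentioned at the end can be sketched in generic notation (the symbols here are illustrative, not the paper's). Write the equilibrium conditions as a system F(x, θ) = 0, where x collects the endogenous variables and θ is the policy parameter. Local policy analysis is possible when the Jacobian in x is invertible, in which case:

```latex
F(x, \theta) = 0, \qquad
\frac{dx}{d\theta} \;=\; -\bigl[D_x F(x, \theta)\bigr]^{-1} D_\theta F(x, \theta),
```

so the response of the equilibrium to a small policy change is obtained by solving one linear system at the equilibrium point.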
Discussion Paper
Abstract

A critical element of word of mouth (WOM) or buzz marketing is to identify seeds, often central actors with high degree in the social network. Seed identification typically requires data on the full network structure, which is often unavailable. We therefore examine the impact of WOM seeding strategies motivated by the friendship paradox, which obtain more central nodes without knowledge of the network structure. But higher-degree nodes may communicate less with their neighbors; whether friendship-paradox-motivated seeding strategies increase or reduce WOM and adoption therefore remains an empirical question. We develop and estimate a model of WOM and adoption using data on microfinance adoption across 43 villages in India for which we have social network data. Counterfactuals show that the proposed seeding strategies are about 15-20% more effective than random seeding in increasing adoption. Remarkably, they are also about 5-11% more effective than opinion leader seeding, and they are relatively more effective when fewer seeds are available.
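The seeding idea itself needs only local information. The friendship paradox says that a random neighbor of a random node has higher expected degree than a random node, so better-connected seeds can be reached without observing the adjacency structure. A minimal sketch, with graph representation and function names of my own choosing rather than the paper's:

```python
import random

def random_seeds(graph, k, rng):
    """Baseline: sample k seed nodes uniformly at random."""
    return rng.sample(list(graph), k)

def friend_seeds(graph, k, rng):
    """Friendship-paradox seeding: sample k random nodes, then seed a random
    neighbor of each. High-degree nodes appear in many neighbor lists, so the
    seeds are degree-biased even though the full network is never observed."""
    return [rng.choice(graph[node]) for node in rng.sample(list(graph), k)]
```

On a star graph, for example, the random-neighbor step lands on the hub whenever the first draw is a leaf, which is most of the time.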

Discussion Paper
Abstract

We propose an instrumental-variable (IV) approach to estimate the causal effect of service satisfaction on customer loyalty, by exploiting a common source of randomness in the assignment of service employees to customers in service queues. Our approach can be applied at no incremental cost by using routine repeated cross-sectional customer survey data collected by firms. The IV approach addresses multiple sources of biases that pose challenges in estimating the causal effect using cross-sectional data: (i) the upward bias from common-method variance due to the joint measurement of service satisfaction and loyalty intent in surveys; (ii) the attenuation bias caused by measurement errors in service satisfaction; and (iii) the omitted-variable bias that may be in either direction. In contrast to the common concern about the upward common-method bias in estimates using cross-sectional survey data, we find that ordinary least squares (OLS) substantially underestimates the causal effect, suggesting that the downward bias due to measurement errors and/or omitted variables is dominant. The underestimation is even more significant with a behavioral measure of loyalty, where there is no common-method bias. This downward bias leads to significant underestimation of the positive profit impact from improving service satisfaction and can lead to under-investment by firms in service satisfaction. Finally, we find that the causal effect of service satisfaction on loyalty is greater for more difficult types of services.
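The attenuation mechanism is easy to reproduce with standard two-stage least squares. The sketch below uses hypothetical simulated variables (an instrument standing in for the random employee assignment) rather than the paper's actual specification: measurement error in the observed satisfaction score biases OLS toward zero, while the IV estimate recovers the true effect.

```python
import numpy as np

def two_stage_ls(y, x, Z):
    """Manual 2SLS for a single endogenous regressor x with instrument
    matrix Z; an intercept is added in both stages."""
    n = len(y)
    Z1 = np.column_stack([np.ones(n), Z])
    # First stage: project the (mismeasured) regressor on the instruments.
    x_hat = Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    # Second stage: regress the outcome on the fitted regressor.
    X1 = np.column_stack([np.ones(n), x_hat])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1]  # slope coefficient
```

Because the instrument is uncorrelated with the measurement error, the first-stage projection purges it, which is exactly why the IV slope is consistent while the OLS slope is attenuated.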

Discussion Paper
Abstract

Commonly used tests to assess evidence for the absence of autocorrelation in a univariate time series, or of serial cross-correlation between time series, rely on procedures whose validity holds for i.i.d. data. When the series are not i.i.d., the size of correlogram and cumulative Ljung-Box tests can be significantly distorted. This paper adapts standard correlogram and portmanteau tests to accommodate hidden dependence and non-stationarities involving heteroskedasticity, thereby uncoupling these tests from limiting assumptions that reduce their applicability in empirical work. To enhance the Ljung-Box test for non-i.i.d. data, a new cumulative test is introduced. The asymptotic size of these tests is unaffected by hidden dependence and heteroskedasticity in the series. Related extensions are provided for testing cross-correlation at various lags in bivariate time series. Tests for the i.i.d. property of a time series are also developed. An extensive Monte Carlo study confirms good performance in both size and power for the new tests. Applications to real data reveal that standard tests frequently produce spurious evidence of serial correlation.

Discussion Paper
Abstract

We investigate the role of training in reducing the gender wage gap using the UK BHPS (British Household Panel Survey), which contains detailed records of training. Using policy changes over an 18-year period, we identify the impact of training and work experience on wages, earnings, and employment. Based on a lifecycle model and using reforms as a source of exogenous variation, we evaluate the role of formal training and experience in shaping the evolution of wages and employment careers, conditional on education. Training is potentially important in compensating for the effects of children, especially for women who left education after completing high school.