# SSRN - CFDP Series

### Evolution of Modeling of the Economics of Global Warming: Changes in the DICE Model, 1992-2017

William D. Nordhaus

Many areas of the natural and social sciences involve complex systems that link together multiple sectors. Integrated assessment models (IAMs) are approaches that integrate knowledge from two or more domains into a single framework, and these are particularly important for climate change. One of the earliest IAMs for climate change was the DICE/RICE family of models, first published in Nordhaus (1992), with the latest version in Nordhaus (2017, 2017a). A difficulty in assessing IAMs is the inability to use standard statistical tests because of the lack of a probabilistic structure. In the absence of statistical tests, the present study examines the extent of revisions of the DICE model over its quarter-century history. The study finds that the major revisions have come primarily from the economic aspects of the model, whereas the environmental changes have been much smaller. Particularly sharp revisions have occurred for global output, damages, and the social cost of carbon. These results indicate that the economic projections are the least precise parts of IAMs and deserve much greater study than has been the case up to now, especially careful studies of long-run economic growth (to 2100 and beyond).

Categories: Publications

### Tribute to T. W. Anderson

Peter C. B. Phillips

Professor T.W. Anderson passed away on September 17, 2016 at the age of 98 years after an astonishing career that spanned more than seven decades. Standing at the nexus of the statistics and economics professions, Ted Anderson made enormous contributions to both disciplines, playing a significant role in the birth of modern econometrics with his work on structural estimation and testing in the Cowles Commission during the 1940s, and educating successive generations through his brilliant textbook expositions of time series and multivariate analysis. This article is a tribute to his many accomplishments.

Categories: Publications

### John Denis Sargan at the London School of Economics

David F. Hendry and Peter C. B. Phillips

During his period at the LSE from the early 1960s to the mid 1980s, John Denis Sargan rose to international prominence and the LSE emerged as the world’s leading centre for econometrics. Within this context, we examine the life of Denis Sargan, describe his major research accomplishments, recount the work of his many doctoral students, and track this remarkable period that constitutes the Sargan era of econometrics at the LSE.

Categories: Publications

### Econometric Measurement of Earth's Transient Climate Sensitivity

Peter C. B. Phillips, Thomas Leirvik and Trude Storelvmo

How sensitive is Earth’s climate to a given increase in atmospheric greenhouse gas (GHG) concentrations? This long-standing and fundamental question in climate science was recently analyzed by dynamic panel data methods using extensive spatiotemporal data of global surface temperatures, solar radiation, and GHG concentrations over the last half century to 2010 (Storelvmo et al., 2016). These methods revealed that atmospheric aerosol effects masked approximately one-third of the continental warming due to increasing GHG concentrations over this period, thereby implying greater climate sensitivity to GHGs than previously thought. The present study provides asymptotic theory justifying the use of these methods when there are stochastic process trends in both the global forcing variables, such as GHGs, and station-level trend effects from such sources as local aerosol pollutants. These asymptotics validate confidence interval construction for econometric measures of Earth’s transient climate sensitivity. The methods are applied to observational data and to data generated from three leading global climate models (GCMs) that are sampled spatio-temporally in the same way as the empirical observations. The findings indicate that estimates of transient climate sensitivity produced by these GCMs lie within empirically determined confidence limits but that the GCMs uniformly underestimate aerosol-induced dimming effects. The analysis shows the potential of econometric methods to calibrate GCM performance against observational data and to reveal the respective sensitivity parameters (GHG and non-GHG related) governing GCM temperature trends.

Categories: Publications

### Research Design Meets Market Design: Using Centralized Assignment for Impact Evaluation

Atila Abdulkadiroglu, Joshua D. Angrist, Yusuke Narita and Parag A. Pathak

A growing number of school districts use centralized assignment mechanisms to allocate school seats in a manner that reflects student preferences and school priorities. Many of these assignment schemes use lotteries to ration seats when schools are oversubscribed. The resulting random assignment opens the door to credible quasi-experimental research designs for the evaluation of school effectiveness. Yet the question of how best to separate the lottery-generated variation integral to such designs from non-random preferences and priorities remains open. This paper develops easily-implemented empirical strategies that fully exploit the random assignment embedded in a wide class of mechanisms, while also revealing why seats are randomized at one school but not another. We use these methods to evaluate charter schools in Denver, one of a growing number of districts that combine charter and traditional public schools in a unified assignment system. The resulting estimates show large achievement gains from charter school attendance. Our approach generates efficiency gains over ad hoc methods, such as those that focus on schools ranked first, while also identifying a more representative average causal effect. We also show how to use centralized assignment mechanisms to identify causal effects in models with multiple school sectors.

Categories: Publications

### Zone Pricing in Retail Oligopoly

Brian Adams and Kevin R. Williams

We quantify the welfare effects of zone pricing, or setting common prices across distinct markets, in retail oligopoly. Although monopolists can only increase profits by price discriminating, this need not be true when firms face competition. With novel data covering the retail home improvement industry, we find that Home Depot would benefit from finer pricing but that Lowe’s would prefer coarser pricing. The use of zone pricing softens competition in markets where firms compete, but it shields consumers from higher prices in markets where firms might otherwise exercise market power. Overall, zone pricing produces higher consumer surplus than finer price discrimination does.

Categories: Publications

### The Scope of Sequential Screening with Ex-Post Participation Constraints

Dirk Bergemann, Francisco Castro and Gabriel Y. Weintraub

We study the classic sequential screening problem under ex-post participation constraints. Thus the seller is required to satisfy buyers’ ex-post participation constraints. A leading example is the online display advertising market, in which publishers frequently cannot use up-front fees and instead use transaction-contingent fees. We establish when the optimal selling mechanism is static (buyers are not screened) or dynamic (buyers are screened), and obtain a full characterization of such contracts. We begin by analyzing our model within the leading case of exponential distributions with two types. We provide a necessary and sufficient condition for the optimality of the static contract. If the means of the two types are sufficiently close, then no screening is optimal. If they are sufficiently far apart, then a dynamic contract becomes optimal. Importantly, the latter contract randomizes the low type buyer while giving a deterministic allocation to the high type. It also makes the low type worse off and the high type better off compared to the contract the seller would offer if he knew the buyer’s type. Our main result establishes a necessary and sufficient condition under which the static contract is optimal for general distributions. We show that when this condition fails, a dynamic contract that randomizes the low type buyer is optimal.

Categories: Publications

### Global Collateral: How Financial Innovation Drives Capital Flows and Increases Financial Instability

Ana Fostel, John Geanakoplos and Gregory Phelan

We show that cross-border financial flows arise when countries differ in their abilities to use assets as collateral. Financial integration is a way of sharing scarce collateral. The ability of one country to leverage and tranche assets provides attractive financial contracts to investors in the other country, and general equilibrium effects on prices create opportunities for investors in the financially advanced country to invest abroad. Foreign demand for collateral and for collateral-backed financial promises increases the collateral value of domestic assets, and cheap foreign assets provide attractive returns to investors who do not demand collateral to issue promises. Gross global flows respond dynamically to fundamentals, exporting and amplifying financial volatility.

Categories: Publications

### Inefficient Liquidity Provision

John Geanakoplos and Kieran James Walsh

We prove that in competitive market economies with no insurance for idiosyncratic risks, agents will always overinvest in illiquid long term assets and underinvest in short term liquid assets. We take as our setting the seminal model of Diamond and Dybvig (1983), who first posed the question in a tractable model. We reach such a simple conclusion under mild conditions because we stick to the basic competitive market framework, avoiding the banks and intermediaries that Diamond and Dybvig and others introduced.

Categories: Publications

### Money and Status: How Best to Incentivize Work

Pradeep K. Dubey and John Geanakoplos

Status is greatly valued in the real world, yet it has not received much attention from economic theorists. We examine how the owner of a firm can best combine money and status to get her employees to work hard for the least total cost. We find that she should motivate workers of low skill mostly by status and high skill mostly by money. Moreover, she should do so by using a small number of titles and wage levels. This often results in star wages to the elite performers and, more generally, in wage jumps for small increases in productivity. By analogy, the governance of a society should pay special attention to the status concerns of ordinary citizens, which may often be accomplished by reinforcing suitable social norms.

Categories: Publications

### Information Design: A Unified Perspective

Dirk Bergemann and Stephen Morris

Fixing a game with uncertain payoffs, information design identifies the information structure and equilibrium that maximizes the payoff of an information designer. We show how this perspective unifies existing work, including that on communication in games (Myerson (1991)), Bayesian persuasion (Kamenica and Gentzkow (2011)) and some of our own recent work. Information design has a literal interpretation, under which there is a real information designer who can commit to the choice of the best information structure (from her perspective) for a set of participants in a game. We emphasize a metaphorical interpretation, under which the information design problem is used by the analyst to characterize play in the game under many different information structures.

Categories: Publications

### Optimal Sup-Norm Rates and Uniform Inference on Nonlinear Functionals of Nonparametric IV Regression

Xiaohong Chen and Timothy Christensen

This paper makes several important contributions to the literature on nonparametric instrumental variables (NPIV) estimation and inference on a structural function h0 and its functionals. First, we derive sup-norm convergence rates for computationally simple sieve NPIV (series 2SLS) estimators of h0 and its derivatives. Second, we derive a lower bound that describes the best possible (minimax) sup-norm rates of estimating h0 and its derivatives, and show that the sieve NPIV estimator can attain the minimax rates when h0 is approximated via a spline or wavelet sieve. Our optimal sup-norm rates surprisingly coincide with the optimal root-mean-squared rates for severely ill-posed problems, and are only a logarithmic factor slower than the optimal root-mean-squared rates for mildly ill-posed problems. Third, we use our sup-norm rates to establish uniform Gaussian process strong approximations and score bootstrap uniform confidence bands (UCBs) for collections of nonlinear functionals of h0 under primitive conditions, allowing for mildly and severely ill-posed problems. Fourth, as applications, we obtain the first asymptotic pointwise and uniform inference results for plug-in sieve t-statistics of exact consumer surplus (CS) and deadweight loss (DL) welfare functionals under low-level conditions when demand is estimated via sieve NPIV. As an empirical illustration, we construct UCBs for exact CS and DL functionals of gasoline demand, revealing interesting patterns that are applicable to other markets.

Categories: Publications

### A Note on Optimal Inference in the Linear IV Model

Donald W. K. Andrews, Vadim Marmer and Zhengfei Yu

This paper considers tests and confidence sets (CSs) concerning the coefficient on the endogenous variable in the linear IV regression model with homoskedastic normal errors and one right-hand side endogenous variable. The paper derives a finite-sample lower bound function for the probability that a CS constructed using a two-sided invariant similar test has infinite length and shows numerically that the conditional likelihood ratio (CLR) CS of Moreira (2003) is not always "very close" to this lower bound function. This implies that the CLR test is not always very close to the two-sided asymptotically-efficient (AE) power envelope for invariant similar tests of Andrews, Moreira, and Stock (2006) (AMS). On the other hand, the paper establishes the finite-sample optimality of the CLR test when the correlation between the structural and reduced-form errors, or between the two reduced-form errors, goes to 1 or -1 and other parameters are held constant, where optimality means achievement of the two-sided AE power envelope of AMS. These results cover the full range of (non-zero) IV strength. The paper investigates in detail scenarios in which the CLR test is not on the two-sided AE power envelope of AMS. Also, the paper shows via theory and numerical work that the CLR test is close to having greatest average power, where the average is over a grid of concentration parameter values and over pairs of alternative hypothesis values of the parameter of interest, uniformly over pairs of alternative hypothesis values and uniformly over the correlation between the structural and reduced-form errors. The paper concludes that, although the CLR test is not always very close to the two-sided AE power envelope of AMS, CLR tests and CSs have very good overall properties.

Categories: Publications

### Uniform Inference in Panel Autoregression

John C. Chao and Peter C. B. Phillips

This paper considers estimation and inference concerning the autoregressive coefficient (ρ) in a panel autoregression for which the degree of persistence in the time dimension is unknown. The main objective is to construct confidence intervals for ρ that are asymptotically valid, having asymptotic coverage probability at least that of the nominal level uniformly over the parameter space. It is shown that a properly normalized statistic based on the Anderson-Hsiao IV procedure, which we call the M statistic, is uniformly convergent and can be inverted to obtain asymptotically valid interval estimates. In the unit root case, confidence intervals based on this procedure are unsatisfactorily wide and uninformative. To sharpen the intervals a new procedure is developed using information from unit root pretests to select alternative confidence intervals. Two sequential tests are used to assess how close ρ is to unity and to correspondingly tailor intervals near the unit root region. When ρ is close to unity, the width of these intervals shrinks to zero at a faster rate than that of the confidence interval based on the M statistic. Only when both tests reject the unit root hypothesis does the construction revert to the M statistic intervals, whose width has the optimal N^{-1/2}T^{-1/2} rate of shrinkage when the underlying process is stable. The asymptotic properties of this pretest-based procedure show that it produces confidence intervals with at least the prescribed coverage probability in large samples. Simulations confirm that the proposed interval estimation methods perform well in finite samples and are easy to implement in practice. A supplement to the paper provides an extensive set of new results on the asymptotic behavior of panel IV estimators in weak instrument settings.

Categories: Publications

### Weak σ-Convergence: Theory and Applications

Jianning Kong, Peter C. B. Phillips and Donggyu Sul

The concept of relative convergence, which requires the ratio of two time series to converge to unity in the long run, explains convergent behavior when series share commonly divergent stochastic or deterministic trend components. Relative convergence of this type does not necessarily hold when series share common time decay patterns measured by evaporating rather than divergent trend behavior. To capture convergent behavior in panel data that do not involve stochastic or divergent deterministic trends, we introduce the notion of weak σ-convergence, whereby cross section variation in the panel decreases over time. The paper formalizes this concept and proposes a simple-to-implement linear trend regression test of the null of no σ-convergence. Asymptotic properties for the test are developed under general regularity conditions and various data generating processes. Simulations show that the test has good size control and discriminatory power. The method is applied to examine whether the idiosyncratic components of 90 disaggregate personal consumption expenditure (PCE) price index items σ-converge over time. We find strong evidence of weak σ-convergence in the period after 1992, which implies that cross sectional dependence has strengthened over the last two decades. In a second application, the method is used to test whether experimental data in ultimatum games converge over successive rounds, again finding evidence in favor of weak σ-convergence. A third application studies convergence and divergence in US state unemployment data over the period 2001-2016.
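The linear trend regression test can be illustrated schematically: compute the cross-section variance of the panel at each date and regress it on a linear trend, reading a significantly negative slope as evidence that cross-section variation is shrinking. This is a minimal sketch of the idea only, not the paper's exact statistic or its asymptotic theory.

```python
import numpy as np

def sigma_convergence_trend_test(x):
    """Sketch of a linear-trend test for weak sigma-convergence.

    x is an (N, T) panel. Compute the cross-section sample variance
    K_t at each date, then run an OLS regression of K_t on a linear
    trend. A significantly negative slope suggests cross-section
    variation decreases over time (weak sigma-convergence).
    Returns the slope and its conventional t-ratio.
    """
    N, T = x.shape
    K = x.var(axis=0, ddof=1)          # cross-section variance at each t
    t = np.arange(1, T + 1, dtype=float)
    tc = t - t.mean()                  # demeaned trend regressor
    slope = np.sum(tc * (K - K.mean())) / np.sum(tc ** 2)
    resid = (K - K.mean()) - slope * tc
    se = np.sqrt(resid @ resid / (T - 2) / np.sum(tc ** 2))
    return slope, slope / se

# A converging panel: an evaporating common-loading component plus noise.
rng = np.random.default_rng(1)
N, T = 100, 60
decay = 0.95 ** np.arange(T)           # evaporating trend component
x = decay * rng.normal(size=(N, 1)) + 0.1 * rng.normal(size=(N, T))
slope, t_ratio = sigma_convergence_trend_test(x)
```

The simulated panel has loadings on a geometrically decaying factor, the "evaporating rather than divergent trend behavior" the abstract describes, so the fitted trend slope comes out negative.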

Categories: Publications

### Identification of Nonparametric Simultaneous Equations Models with a Residual Index Structure

Steven Berry and Philip A. Haile

We present new identification results for a class of nonseparable nonparametric simultaneous equations models introduced by Matzkin (2008). These models combine traditional exclusion restrictions with a requirement that each structural error enter through a “residual index.” Our identification results are constructive and encompass a range of special cases with varying demands on the exogenous variation provided by instruments and the shape of the joint density of the structural errors. The most important of these results demonstrate identification even when instruments have limited variation. A genericity result demonstrates a formal sense in which the associated density conditions may be viewed as mild, even when instruments vary only over a small open ball.

Categories: Publications

### The Incidence of Carbon Taxes in U.S. Manufacturing: Lessons from Energy Cost Pass-Through

Sharat Ganapati, Joseph S. Shapiro and Reed Walker

This paper estimates how increases in production costs due to energy inputs affect consumer versus producer surplus (i.e., incidence). In doing so, we develop a general methodology to measure the incidence of changes in input costs that can account for three first-order issues: factor substitution amongst inputs used for production, incomplete pass-through of input costs, and industry competitiveness. We apply this methodology to a set of U.S. manufacturing industries for which we observe plant-level output prices and input costs. We find that about 70 percent of energy price-driven changes in input costs are passed through to consumers. This implies that the share of welfare cost borne by consumers is 25-75 percent smaller (and the share borne by producers is correspondingly larger) than most existing work assumes.
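The pass-through estimate at the core of the methodology can be illustrated with a stylized regression of plant-level price changes on energy-driven cost changes. The 0.7 "true" rate in the simulation is taken from the abstract's headline figure; everything else here is a hypothetical simplification of the paper's plant-level design.

```python
import numpy as np

def passthrough_ols(dlog_price, dlog_cost):
    """Illustrative pass-through regression (not the paper's full design).

    Regress plant-level changes in log output prices on changes in log
    energy-driven input costs. The slope is the pass-through rate:
    1.0 means cost shocks are fully passed on to consumers; values
    below 1.0 mean producers absorb part of the shock.
    """
    x = dlog_cost - dlog_cost.mean()
    y = dlog_price - dlog_price.mean()
    return np.sum(x * y) / np.sum(x * x)

# Simulated plants with an assumed true pass-through of 0.7,
# matching the roughly 70 percent figure reported in the abstract.
rng = np.random.default_rng(2)
n = 5000
dlog_cost = rng.normal(0.0, 0.05, size=n)
dlog_price = 0.7 * dlog_cost + rng.normal(0.0, 0.02, size=n)
pt_rate = passthrough_ols(dlog_price, dlog_cost)
```

Incomplete pass-through is what drives the incidence result: with a rate near 0.7, consumers bear a correspondingly smaller share of the welfare cost than under the full pass-through assumed in most existing work.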

Categories: Publications

### Consequences of the Clean Water Act and the Demand for Water Quality

David Andrew Keiser and Joseph S. Shapiro

Since the 1972 U.S. Clean Water Act, government and industry have invested over $1 trillion to abate water pollution, or $100 per person-year. Over half of U.S. stream and river miles, however, still violate pollution standards. We use the most comprehensive set of files ever compiled on water pollution and its determinants, including 50 million pollution readings from 170,000 monitoring sites, to study water pollution’s trends, causes, and welfare consequences. We have three main findings. First, water pollution concentrations have fallen substantially since 1972, though they were declining at faster rates before then. Second, the Clean Water Act’s grants to municipal wastewater treatment plants caused some of these declines. Third, the grants’ estimated effects on housing values are generally smaller than the grants’ costs.

Categories: Publications

### Narrative Economics

Robert J. Shiller

This address considers the epidemiology of narratives relevant to economic fluctuations. The human brain has always been highly tuned towards narratives, whether factual or not, to justify ongoing actions, even such basic actions as spending and investing. Stories motivate and connect activities to deeply felt values and needs. Narratives “go viral” and spread far, even worldwide, with economic impact. The 1920-21 Depression, the Great Depression of the 1930s, the so-called “Great Recession” of 2007-9, and the contentious political-economic situation of today are considered results of the popular narratives of their respective times. Though these narratives are deeply human phenomena that are difficult to study in a scientific manner, quantitative analysis may help us gain a better understanding of these epidemics in the future.

Categories: Publications

### The Benefit of Collective Reputation

Zvika Neeman, Aniko Oery and Jungju Yu

We study a model of collective reputation and use it to analyze the benefit of collective brands. Consumers form beliefs about the quality of an experience good that is produced by one firm that is part of a collective brand. Consumers’ limited ability to distinguish among firms in the collective and to monitor firms’ investment decisions creates incentives to free-ride on other firms’ investment efforts. Nevertheless, we show that collective brands induce stronger incentives to invest in quality than individual brands under two types of circumstances: if the main concern is with quality control and the baseline reputation of the collective is low, or if the main concern is with the acquisition of specialized knowledge and the baseline reputation of the collective is high. We also contrast the socially optimal information structure with the profit maximizing choice of branding if branding is endogenous. Our results can be applied to country-of-origin, agricultural appellation, and other collective brands.

Categories: Publications