Publications

Discussion Paper
Abstract

We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel “prediction accuracy” order over subjective models, and observe that this makes it possible to partially restore standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions to determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). We show that these conditions can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to analyze environments where learning is “slow,” such as costly information acquisition and sequential social learning. In such environments, we illustrate that even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can generate extreme failures of learning.
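
As a rough illustration of the mechanism (a minimal sketch, not the paper's construction): consider an agent who updates Bayesian beliefs over a small set of subjective models, none of which matches the true data-generating process. The posterior concentrates on the model whose predictions best fit the data, which is the kind of ranking a prediction-accuracy order formalizes. All parameter values below are hypothetical.

```python
# Hypothetical example of misspecified Bayesian learning (not the paper's model):
# signals are i.i.d. N(0.6, 1), but the agent only entertains means 0 and 1.
# Beliefs concentrate on the subjective model with higher prediction accuracy
# (smaller Kullback-Leibler divergence from the truth), here theta = 1,
# even though neither model is correct.
import numpy as np

rng = np.random.default_rng(0)
theta_true = 0.6                         # true signal mean, outside the model set
models = np.array([0.0, 1.0])            # subjective models the agent entertains
log_post = np.log(np.array([0.5, 0.5]))  # uniform prior over the two models

for x in rng.normal(theta_true, 1.0, size=2000):
    log_post += -0.5 * (x - models) ** 2   # Gaussian log-likelihood of each model

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("long-run belief on models {0, 1}:", post.round(4))  # roughly (0, 1)
```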

Discussion Paper
Abstract

This paper develops a new method informed by data and models to recover information about investor beliefs. Our approach uses information embedded in forward-looking asset prices in conjunction with asset pricing models. We step back from presuming rational expectations and entertain potential belief distortions bounded by a statistical measure of discrepancy. Additionally, our method allows for the direct use of sparse survey evidence to make these bounds more informative. Within our framework, market-implied beliefs may differ from those implied by rational expectations due to behavioral/psychological biases of investors, ambiguity aversion, or omitted permanent components to valuation. Formally, we represent evidence about investor beliefs using a novel nonlinear expectation function deduced using model-implied moment conditions and bounds on statistical divergence. We illustrate our method with a prototypical example from macro-finance using asset market data to infer belief restrictions for macroeconomic growth rates.
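
One way to see the role of the divergence bound (a stylized sketch with made-up numbers, not the paper's estimator): bound the expectation of a macroeconomic growth rate over all belief distortions whose relative entropy from a baseline model is at most a budget eta. By convex duality, the resulting worst-case expectations solve one-dimensional minimizations, which can be approximated by Monte Carlo.

```python
# Stylized illustration (not the paper's nonlinear expectation): bounds on the
# mean of a growth rate when beliefs Q may deviate from a baseline model P by
# at most eta in relative entropy.  By duality,
#   sup_{KL(Q||P) <= eta} E_Q[g] = inf_{theta > 0} theta*log E_P[exp(g/theta)] + theta*eta,
# and the lower bound uses -g in place of g.  The baseline draws and eta are made up.
import numpy as np

rng = np.random.default_rng(1)
growth = rng.normal(0.02, 0.03, size=100_000)   # hypothetical baseline model for growth
eta = 0.05                                       # divergence budget on belief distortions

def kl_bound(g, eta, sign):
    """Upper (sign=+1) or lower (sign=-1) bound on the distorted mean of g."""
    thetas = np.geomspace(1e-2, 1e2, 400)        # crude grid for the dual variable
    vals = [t * np.log(np.mean(np.exp(sign * g / t))) + t * eta for t in thetas]
    return sign * min(vals)

print("baseline mean growth:", round(growth.mean(), 4))
print("belief-consistent range:",
      (round(kl_bound(growth, eta, -1), 4), round(kl_bound(growth, eta, +1), 4)))
```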

Discussion Paper
Abstract

We study price discrimination in a market in which two firms engage in Bertrand competition. Some consumers are contested by both firms, while others are “captive” to one of the firms. The market can be divided into segments with different relative shares of captive and contested consumers. We show that the revenue-maximizing segmentation divides the market into “nested” markets, in each of which only one of the two firms may have captive consumers.
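
For a rough numerical sense of why such segmentations can raise revenue (a back-of-the-envelope sketch using textbook Varian/Narasimhan-style mixed-strategy profits with unit valuations, not necessarily the paper's model): giving each firm its own segment of captive consumers, together with a share of the contested ones, softens head-to-head competition.

```python
# Back-of-the-envelope sketch under an assumed textbook specification
# (Varian/Narasimhan-style price competition with unit valuations), not
# necessarily the paper's model.  In a segment, firms have captive masses
# c1, c2 and compete for a contested mass m; with c_hi >= c_lo, standard
# mixed-strategy arguments give equilibrium profits
#   pi_hi = c_hi,   pi_lo = c_hi * (c_lo + m) / (c_hi + m).
def segment_revenue(c1, c2, m):
    hi, lo = max(c1, c2), min(c1, c2)
    if hi + m == 0:
        return 0.0
    if m == 0:                      # no contested consumers: both firms price at 1
        return c1 + c2
    return hi + hi * (lo + m) / (hi + m)

# Made-up market: each firm has 0.3 captive consumers and 0.4 are contested.
unsegmented = segment_revenue(0.3, 0.3, 0.4)
nested = segment_revenue(0.3, 0.0, 0.2) + segment_revenue(0.0, 0.3, 0.2)
print(f"one unsegmented market: {unsegmented:.2f}")   # 0.60
print(f"nested segmentation:    {nested:.2f}")        # 0.84
```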