
Publications


Discussion Paper
Abstract

We study to what extent information aggregation in social learning environments is robust to slight misperceptions of others’ characteristics (e.g., tastes or risk attitudes). We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents’ actions over time, where agents’ actions depend not only on their beliefs about the state but also on their idiosyncratic types. When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, our first main result shows that even arbitrarily small amounts of misperception can generate extreme breakdowns of information aggregation, where, in the long run, all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. This stark, discontinuous departure from the correctly specified benchmark motivates independent analysis of information aggregation under misperception.
Our second main result shows that any misperception of the type distribution gives rise to a specific failure of information aggregation, in which agents’ long-run beliefs and behavior vary only coarsely with the state, and we provide systematic predictions for how the nature of the misperception shapes these coarse long-run outcomes. Finally, we show that the sensitivity of information aggregation to misperception depends on how rich agents’ payoff-relevant uncertainty is. A design implication is that information aggregation can be improved through interventions that simplify the agents’ learning environment.
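The logic of the first result can be previewed with a deliberately crude calculation (this is not the paper’s model: behavior is fixed exogenously and every number below is hypothetical). The sketch takes a knife-edge case in which, under the true type mix, the aggregate frequency of each action is identical across states, so correctly interpreted actions carry no information; it then computes the expected drift of an observer’s log-likelihood ratio when the observer updates with slightly misperceived type shares. Any nonzero misperception makes the drift strictly negative, pulling beliefs toward one fixed state regardless of the truth, in the spirit of the breakdown described above.

```python
import numpy as np

# Hypothetical action probabilities by type and state: in this knife-edge case
# the aggregate frequency of action 1 is 0.5 in BOTH states under the true
# type mix, so actions are uninformative when correctly interpreted.
p = {"A": {0: 0.85, 1: 0.90},   # type A chooses action 1 with these probabilities
     "B": {0: 0.15, 1: 0.10}}   # type B has opposite tastes

true_share_A = 0.5              # true fraction of type-A agents

def perceived_freq(theta, eps):
    """Action-1 frequency in state theta under a misperceived type mix (share_A off by eps)."""
    share_A = true_share_A + eps
    return share_A * p["A"][theta] + (1 - share_A) * p["B"][theta]

def expected_drift(eps, true_theta):
    """Expected per-observation change in log P(theta=1)/P(theta=0) for an observer
    who updates with misperceived frequencies, while actions are actually generated
    under the true type mix in state true_theta."""
    f_true = true_share_A * p["A"][true_theta] + (1 - true_share_A) * p["B"][true_theta]
    f1, f0 = perceived_freq(1, eps), perceived_freq(0, eps)
    return f_true * np.log(f1 / f0) + (1 - f_true) * np.log((1 - f1) / (1 - f0))

for eps in [0.0, 0.01, 0.05]:
    print(f"eps={eps:.2f}: drift when theta=1 is {expected_drift(eps, 1):+.6f}, "
          f"when theta=0 is {expected_drift(eps, 0):+.6f}")
# With eps = 0 the drift is zero; any eps != 0 makes both drifts negative, so
# beliefs move toward state 0 regardless of the true state.
```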

Discussion Paper
Abstract

We describe a methodology for making counterfactual predictions in settings where the information held by strategic agents and the distribution of payoff-relevant states of the world are unknown. The analyst observes behavior assumed to be rationalized by a Bayesian model in which agents maximize expected utility given partial and differential information about the state. A counterfactual prediction is desired about behavior in another strategic setting, under the hypothesis that the distribution of the state and agents’ information about the state are held fixed. When the data and the desired counterfactual prediction pertain to environments with finitely many states, players, and actions, the counterfactual prediction is described by finitely many linear inequalities, even though the latent parameter, the information structure, is infinite-dimensional.
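A minimal single-agent sketch conveys the flavor of such a linear system (the payoff matrix and candidate distribution below are invented for illustration; the paper’s characterization handles multiple players and links the observed and counterfactual environments). A candidate joint distribution over (state, action) is consistent with expected-utility maximization under some information structure precisely when it satisfies the obedience inequalities, one linear inequality per ordered pair of actions.

```python
import numpy as np

# Hypothetical 2-state, 2-action decision problem; u[a, theta] is the payoff.
u = np.array([[1.0, 0.0],    # action 0: good in state 0
              [0.0, 1.0]])   # action 1: good in state 1

# Candidate joint distribution q[theta, a] over (state, action) that the analyst
# entertains as an explanation of observed behavior.
q = np.array([[0.35, 0.15],
              [0.10, 0.40]])

def obedient(q, u, tol=1e-9):
    """Check the finitely many linear 'obedience' inequalities: whenever action a
    is played, no deviation a_dev yields a higher expected payoff under the
    conditional distribution of the state given a."""
    n_states, n_actions = q.shape
    for a in range(n_actions):
        for a_dev in range(n_actions):
            # sum_theta q(theta, a) * (u(a, theta) - u(a_dev, theta)) >= 0
            gain = sum(q[t, a] * (u[a, t] - u[a_dev, t]) for t in range(n_states))
            if gain < -tol:
                return False
    return True

print(obedient(q, u))  # True: q is rationalizable by some information structure
```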

Discussion Paper
Abstract

People reason about uncertainty with deliberately incomplete models, including only the most relevant variables. How do people hampered by different, incomplete views of the world learn from each other? We introduce a model of “model-based inference.” Model-based reasoners partition an otherwise hopelessly complex state space into a manageable model. We find that unless the differences in agents’ models are trivial, interactions will often not lead agents to have common beliefs, and indeed the correct-model belief will typically lie outside the convex hull of the agents’ beliefs. However, if the agents’ models have enough in common, then interacting will lead agents to similar beliefs, even if their models also exhibit some bizarre idiosyncrasies and their information is widely dispersed.
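A hedged numerical sketch (not the paper’s framework; all primitives below are invented) makes the convex-hull point concrete. Suppose the state has two binary components and a single signal depends on both, while each agent’s incomplete model keeps only one component and averages the signal’s distribution over the omitted one. Both agents update correctly within their own models, yet both understate a joint event, so the correct-model belief lies outside the interval spanned by their beliefs.

```python
import numpy as np

# Hypothetical setup: the state has two binary components (a, b), uniform prior.
# A binary signal x depends on BOTH components, but each agent's incomplete
# model includes only one of them.
p_x1 = {(1, 1): 0.9, (1, 0): 0.6, (0, 1): 0.4, (0, 0): 0.1}  # true P(x=1 | a, b)
states = list(p_x1)

def full_model_posterior(event, x=1):
    """Correct-model posterior of an event after observing x (uniform prior)."""
    lik = {s: p_x1[s] if x == 1 else 1 - p_x1[s] for s in states}
    return sum(lik[s] for s in states if event(s)) / sum(lik.values())

def coarse_posterior(component, x=1):
    """Posterior that 'my variable = 1' for an agent whose model keeps only one
    component and uses the prior-averaged likelihood for that component."""
    idx = 0 if component == "a" else 1
    lik = {v: np.mean([p_x1[s] for s in states if s[idx] == v]) for v in (0, 1)}
    if x == 0:
        lik = {v: 1 - lik[v] for v in (0, 1)}
    return lik[1] / (lik[0] + lik[1])

event_both = lambda s: s == (1, 1)
truth = full_model_posterior(event_both)   # 0.45
# Each agent treats the omitted component as independent of the signal, so its
# belief in (a=1, b=1) is its own posterior times the 0.5 prior on the other part.
belief_1 = coarse_posterior("a") * 0.5     # 0.375
belief_2 = coarse_posterior("b") * 0.5     # 0.325
print(truth, belief_1, belief_2)
# The correct-model belief (0.45) lies outside the interval spanned by the two
# agents' beliefs, as in the convex-hull observation above.
```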

Discussion Paper
Abstract

“Crowds” are often regarded as “wiser” than individuals, and prediction markets are often regarded as effective methods for harnessing this wisdom. If the agents in prediction markets are Bayesians who share a common model and prior belief, then the no-trade theorem implies that we should see no trade in the market. But if the agents in the market are not Bayesians who share a common model and prior belief, then it is no longer obvious that the market outcome aggregates or conveys information. In this paper, we examine a stylized prediction market composed of Bayesian agents whose inferences are based on different models of the underlying environment. We explore a basic tension: the differences in models that give rise to the possibility of trade generally preclude the possibility of perfect information aggregation.
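A crude, self-contained toy (all numbers hypothetical, and far simpler than the market studied here) illustrates both halves of the tension: with identical beliefs there is no price at which one trader wants to buy and the other to sell, while model-driven disagreement produces trade, but only at prices strictly between the traders’ beliefs, which may all lie away from the full-model probability.

```python
# Hypothetical beliefs of two risk-neutral traders about a binary event, derived
# from different incomplete models (the numbers are illustrative only).
belief_buyer, belief_seller = 0.375, 0.325
full_info_prob = 0.45  # probability under the correct, full model (also illustrative)

def trade_occurs(price, b1, b2):
    """One unit trades iff one trader strictly prefers to buy and the other to sell."""
    return (b1 > price > b2) or (b2 > price > b1)

# With identical beliefs (common model and prior) there is no price at which
# both sides want to trade -- a crude shadow of the no-trade theorem.
print(any(trade_occurs(p, 0.4, 0.4) for p in [x / 100 for x in range(1, 100)]))  # False

# With model-driven disagreement, trade happens, but only at prices strictly
# between the two beliefs -- so the price cannot reach the full-model probability.
candidate_prices = [p / 1000 for p in range(1, 1000)]
trading_prices = [p for p in candidate_prices if trade_occurs(p, belief_buyer, belief_seller)]
print(min(trading_prices), max(trading_prices))                      # ~0.326 .. ~0.374
print(any(abs(p - full_info_prob) < 1e-9 for p in trading_prices))   # False
```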

Discussion Paper
Abstract

I model financial markets in which decision-making is structured into discrete stages separating contract offers, applications, and acceptance/denial decisions. Endogenous beliefs about applicants’ risk types emerge as the institutional process extracts private information, allowing uninformed firms to infer risk qualities by comparing the applications of many consumers. Endogenous beliefs and low-risk consumers’ behavior make truthful disclosure of transactions incentive compatible, supporting a unique equilibrium that is robust to cream-skimming and cross-subsidizing deviations, even under Hellwig’s “secret” policy assumption. In equilibrium, each type demands the low-risk type’s optimal pooling policy, and the high-risk type supplements to full coverage at the fair price. Nonpassive consumers’ belief that firms are sequentially rational is necessary for equilibrium; a lemon equilibrium in which only high-risk consumers are insured is also possible.