Publication Date: January 2019
We study to what extent information aggregation in social learning environments is robust to slight misperceptions of others’ characteristics (e.g., tastes or risk attitudes). We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents’ actions over time, where agents’ actions depend not only on their beliefs about the state but also on their idiosyncratic types. When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, our first main result shows that even arbitrarily small amounts of misperception can generate extreme breakdowns of information aggregation, where, in the long run, all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. This stark discontinuous departure from the correctly specified benchmark motivates independent analysis of information aggregation under misperception.
Our second main result shows that any misperception of the type distribution gives rise to a specific failure of information aggregation, in which agents’ long-run beliefs and behavior vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Finally, we show that the sensitivity of information aggregation to misperception depends on the richness of agents’ payoff-relevant uncertainty. A design implication is that information aggregation can be improved through interventions aimed at simplifying the agents’ learning environment.
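The breakdown described above can be illustrated with a minimal mean-field sketch, not the paper’s model: a hypothetical binary-state setting with uniform taste types, a symmetric binary signal, and a threshold action rule, where observers misperceive the type distribution by a shift `eps`. All parameter names (`alpha`, `eps`) and functional forms here are illustrative assumptions.

```python
import math

def posterior(p, sig, alpha=0.6):
    """Bayes update of belief p = P(state H) after a private binary signal,
    where P(sig='h' | H) = P(sig='l' | L) = alpha (an assumed signal structure)."""
    if sig == 'h':
        return p * alpha / (p * alpha + (1 - p) * (1 - alpha))
    return p * (1 - alpha) / (p * (1 - alpha) + (1 - p) * alpha)

def action_prob(p, alpha, eps):
    """Probability that an agent takes action 1 under each state, given public
    belief p. With taste types c ~ U(0,1) and the rule 'act iff posterior > c',
    P(act | posterior q) = q. Observers who misperceive the type distribution
    by a shift eps instead attribute probability q + eps to action 1."""
    freqs = []
    for p_h in (alpha, 1 - alpha):  # P(signal 'h') under H, then under L
        q = p_h * posterior(p, 'h', alpha) + (1 - p_h) * posterior(p, 'l', alpha)
        freqs.append(min(max(q + eps, 1e-12), 1 - 1e-12))
    return freqs[0], freqs[1]

def long_run_belief(true_state, eps, alpha=0.6, T=20000):
    """Deterministic mean-field belief dynamics: each period the public
    log-likelihood ratio is updated using the observers' (possibly
    misperceived) action likelihoods, evaluated at the true action frequency."""
    lam = 0.0  # log-likelihood ratio; public belief p = 1 / (1 + exp(-lam))
    for _ in range(T):
        p = 1 / (1 + math.exp(-lam))
        fH, fL = action_prob(p, alpha, eps)   # observers' perceived frequencies
        tH, tL = action_prob(p, alpha, 0.0)   # actual action frequencies
        f = tH if true_state == 'H' else tL
        lam += f * math.log(fH / fL) + (1 - f) * math.log((1 - fH) / (1 - fL))
    return 1 / (1 + math.exp(-lam))

# With correct perception (eps=0), long-run beliefs track the true state;
# with misperception eps=0.2, beliefs in this parameterization converge
# toward state L under BOTH states -- the breakdown described in the abstract.
for theta in ('H', 'L'):
    print(theta, round(long_run_belief(theta, 0.0), 3),
                 round(long_run_belief(theta, 0.2), 3))
```

The sketch captures only the discontinuity result: a small `eps` changes long-run outcomes from state-revealing to state-independent, rather than degrading learning gradually.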
Supplement pages: 17
Keywords: Misspecification, Social learning, Information aggregation, Fragility
CFDP 2160R