CFDP 2235

Stability and Robustness in Misspecified Learning Models

Author(s): 

Publication Date: May 2020

Pages: 59

Abstract: 

We present an approach to analyzing learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. Our main results provide general criteria to determine—without the need to explicitly analyze learning dynamics—when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). The key ingredient underlying these criteria is a novel “prediction accuracy” ordering over subjective models that refines existing comparisons based on Kullback-Leibler divergence. We show that these criteria can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to identify and analyze a natural class of environments, including costly information acquisition and sequential social learning, where, unlike in most settings the literature has focused on so far, long-run beliefs can fail to be robust to the details of the true data-generating process or agents’ perception thereof. In particular, even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning.
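To illustrate the Kullback-Leibler comparison that the paper's prediction accuracy ordering refines, the following sketch computes the KL divergence from a true data-generating process to two hypothetical subjective models. The distributions and model names here are illustrative assumptions, not taken from the paper; the sketch reflects only the classical idea (e.g., Berk 1966) that a misspecified learner's beliefs concentrate on the subjective model(s) minimizing KL divergence from the truth.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions
    given as lists of probabilities over a common finite outcome space."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical true data-generating process over three outcomes.
p_true = [0.5, 0.3, 0.2]

# Two hypothetical subjective models an agent might entertain.
q_model_a = [0.4, 0.4, 0.2]
q_model_b = [0.1, 0.1, 0.8]

d_a = kl_divergence(p_true, q_model_a)
d_b = kl_divergence(p_true, q_model_b)
# d_a < d_b, so under the classical KL criterion long-run beliefs would
# concentrate on model A, the better predictor of the true process.
```

The paper's contribution is an ordering that is finer than this pairwise KL comparison, which matters precisely in the environments where small misspecification can overturn such rankings.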

Keywords: Misspecified learning, Stability, Robustness, Berk-Nash equilibrium