Publication Date: January 2019
Revision Date: September 2019
People reason about uncertainty with deliberately incomplete models, including only the most relevant variables. How do people hampered by different, incomplete views of the world learn from each other? We introduce a model of “model-based inference.” Model-based reasoners partition an otherwise hopelessly complex state space into a manageable model. We find that unless the differences in agents’ models are trivial, interactions will often not lead agents to have common beliefs, and indeed the correct-model belief will typically lie outside the convex hull of the agents’ beliefs. However, if the agents’ models have enough in common, then interacting will lead agents to similar beliefs, even if their models also exhibit some bizarre idiosyncrasies and their information is widely dispersed.
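The possibility that the correct-model belief lies outside the convex hull of the agents’ beliefs can be illustrated with a toy numerical example. The following sketch is our own construction, not the paper’s formal model: the full state is a pair (a, b), each agent’s incomplete model tracks only one of the two variables (freezing the other at its prior), and both agents form beliefs about the event {a = b} after observing the signal s = a + b.

```python
from itertools import product

# Toy illustration (an assumption-laden sketch, NOT the paper's model):
# full state (a, b) with a, b in {0, 1}, uniform prior, signal s = a + b.
# Event of interest: E = {a == b}.
states = list(product([0, 1], [0, 1]))
prior = {w: 0.25 for w in states}

def signal(w):
    a, b = w
    return a + b

s_obs = 1  # observed signal

# Correct (full-model) posterior of E given s = 1:
# s = 1 forces a != b, so P(E | s = 1) = 0.
post = {w: prior[w] * (1 if signal(w) == s_obs else 0) for w in states}
Z = sum(post.values())
correct_belief = sum(p for (a, b), p in post.items() if a == b) / Z

# A model-based agent tracks only one coordinate and marginalizes the
# other at its prior, giving the coarse likelihood P(s | v) for v in {0, 1}.
def coarse_belief(tracked_index):
    lik = {v: sum(0.5 for u in [0, 1]
                  if (v + u if tracked_index == 0 else u + v) == s_obs)
           for v in [0, 1]}
    Zc = sum(0.5 * lik[v] for v in [0, 1])
    post_v = {v: 0.5 * lik[v] / Zc for v in [0, 1]}
    # Belief about E, with the untracked coordinate still at its prior 1/2:
    return sum(post_v[v] * 0.5 for v in [0, 1])

beliefs = [coarse_belief(0), coarse_belief(1)]
print(correct_belief, beliefs)
```

Here the signal is uninformative within each agent’s coarse model (both coarse posteriors over the tracked variable remain uniform), so each agent assigns probability 1/2 to {a = b}, while the correct posterior is 0: the correct belief falls strictly outside the convex hull {1/2} of the agents’ beliefs.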
Keywords: Model-Based Reasoning, Information Aggregation
JEL Classification Codes: D8
See CFDP Version(s): CFDP 2161