Discussion Paper

Common Learning

Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. The signals are independent and identically distributed across time but not necessarily across agents. We show that when each agent’s signal space is finite, the agents will commonly learn its value, i.e., the true value of the parameter will become approximate common knowledge. The argument rests on a contraction mapping property: if one agent’s observed signal frequencies are close to those expected under a given parameter value, then that agent believes the other agent’s frequencies are even closer to their expected values. In contrast, if the agents’ observations come from a countably infinite signal space, then this contraction mapping property fails, and common learning can fail with it.
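To make the setup concrete, the sketch below simulates the signal structure under illustrative assumptions: a binary parameter, binary signal spaces for both agents, and hand-picked joint distributions (none of these numbers come from the paper). It shows only individual Bayesian learning, i.e., each agent's first-order belief converging to the true parameter; common learning is the stronger property about the higher-order beliefs built on top of this.

```python
# Minimal simulation of the abstract's signal structure (illustrative
# parameter values, not taken from the paper). Signals are i.i.d. across
# time and may be correlated across agents within a period; each agent
# updates beliefs using only the marginal distribution of its own signal.
import numpy as np

rng = np.random.default_rng(0)

# Parameter theta takes values 0 or 1 with a uniform prior.
PRIOR = np.array([0.5, 0.5])

# Joint distribution over signal pairs (s1, s2): one 2x2 matrix per theta.
JOINT = np.array([
    [[0.5, 0.2],   # theta = 0
     [0.2, 0.1]],
    [[0.1, 0.2],   # theta = 1
     [0.2, 0.5]],
])

# Each agent observes only its own signal, so it updates on its marginal.
MARG1 = JOINT.sum(axis=2)  # P(s1 | theta)
MARG2 = JOINT.sum(axis=1)  # P(s2 | theta)

def simulate(theta, periods=200):
    """Draw joint signals under `theta`; return each agent's posterior
    P(theta = 1 | own signals) after every period."""
    log_post = np.log(PRIOR)[None, :].repeat(2, axis=0)  # one row per agent
    flat = JOINT[theta].ravel()
    history = []
    for _ in range(periods):
        s1, s2 = divmod(rng.choice(4, p=flat), 2)  # draw a correlated pair
        log_post[0] += np.log(MARG1[:, s1])  # agent 1 updates on s1 only
        log_post[1] += np.log(MARG2[:, s2])  # agent 2 updates on s2 only
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        history.append(post[:, 1].copy())
    return np.array(history)

beliefs = simulate(theta=1)
print("final P(theta=1): agent1=%.4f  agent2=%.4f" % tuple(beliefs[-1]))
```

Running the sketch, both posteriors approach 1 (the true value), but because the agents condition on different signal histories, neither knows what the other believes; the paper's finite-signal result is that such beliefs nonetheless become approximate common knowledge.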