Publication Date: February 2021
We study robust welfare comparisons of learning biases, i.e., deviations from correct Bayesian updating. Given a true signal distribution, we deem one bias more harmful than another if it yields lower objective expected payoffs in all decision problems. We characterize this ranking in static (one signal) and dynamic (many signals) settings. While the static characterization compares posteriors signal-by-signal, the dynamic characterization employs an "efficiency index" quantifying the speed of belief convergence. Our results yield welfare-founded quantifications of the severity of well-documented biases. Moreover, the static and dynamic rankings can conflict, and "smaller" biases can be worse in dynamic settings.