Discussion Paper
Welfare Comparisons for Biased Learning
We study robust welfare comparisons of learning biases, i.e., deviations from correct Bayesian updating. Given a true signal distribution, we deem one bias more harmful than another if it yields lower objective expected payoffs in all decision problems. We characterize this ranking in static (one signal) and dynamic (many signals) settings. While the static characterization compares posteriors signal-by-signal, the dynamic characterization employs an “efficiency index” quantifying the speed of belief convergence.
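As a rough formal sketch of the comparison criterion (the notation here is illustrative and not taken from the paper): let $\theta$ denote the unknown state with prior $\mu_0$, let $\pi(\cdot \mid \theta)$ be the true signal distribution, and let a learning bias $B$ map a prior and a signal $s$ to a (possibly non-Bayesian) posterior $B(\mu_0, s)$. A decision problem is a pair $(A, u)$ of an action set and a payoff function. Bias $B$ would then be deemed more harmful than bias $B'$ if
\[
\mathbb{E}_{\theta \sim \mu_0}\,\mathbb{E}_{s \sim \pi(\cdot \mid \theta)}\big[u(a^{*}_{B}(s), \theta)\big]
\;\le\;
\mathbb{E}_{\theta \sim \mu_0}\,\mathbb{E}_{s \sim \pi(\cdot \mid \theta)}\big[u(a^{*}_{B'}(s), \theta)\big]
\quad \text{for every decision problem } (A, u),
\]
where $a^{*}_{B}(s) \in \arg\max_{a \in A} \mathbb{E}_{\theta \sim B(\mu_0, s)}[u(a, \theta)]$ is the action that is optimal under the biased posterior, while the outer expectations are taken under the objective (true) distribution.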