Countries with more democratic political regimes experienced greater GDP losses and more deaths from COVID-19 in 2020. Using five different instrumental variable strategies, we find that democracy is a major cause of these wealth and health losses. This impact is global and is not driven by China and the US alone. A key channel for democracy’s negative impact is weaker and narrower containment policies at the beginning of the outbreak, not the speed with which policies were introduced.
Democracy is widely believed to contribute to economic growth and public health. However, we find that this conventional wisdom no longer holds and has even reversed: democracy has had persistent negative impacts on GDP growth since the beginning of this century. This finding emerges from five different instrumental variable strategies. Our analysis suggests that democracies cause slower growth through less investment, less trade, and slower value-added growth in manufacturing and services. For 2020, democracy is also found to cause more deaths from COVID-19.
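The instrumental-variable logic behind these findings can be sketched on synthetic data. This is a hypothetical illustration, not the papers' actual instruments, data, or effect sizes: a single instrument shifts the "democracy" regressor but affects growth only through it, so the IV (Wald) ratio recovers the causal coefficient that naive OLS misses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Simulated confounder that drives both democracy and growth,
# making naive OLS biased.
u = rng.normal(size=n)
# Hypothetical instrument: shifts democracy but not growth directly.
z = rng.normal(size=n)
democracy = 0.8 * z + 0.6 * u + rng.normal(size=n)
growth = -0.5 * democracy + 1.0 * u + rng.normal(size=n)  # true effect: -0.5

# Naive OLS slope is biased by the confounder u.
ols = np.cov(democracy, growth)[0, 1] / np.var(democracy)

# IV (Wald ratio with one instrument): Cov(z, y) / Cov(z, d)
# recovers the causal coefficient.
iv = np.cov(z, growth)[0, 1] / np.cov(z, democracy)[0, 1]

print(f"OLS: {ols:.2f}, IV: {iv:.2f}")  # IV should be near -0.5
```

With several instruments, the same idea generalizes to two-stage least squares; the papers combine five such strategies.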
Algorithms produce a growing portion of decisions and recommendations both in policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic algorithms. Our estimator is shown to be consistent and asymptotically normal for well-defined causal effects. A key special case of our estimator is a high-dimensional regression discontinuity design. The proofs use tools from differential geometry and geometric measure theory, which may be of independent interest.
The practical performance of our method is first demonstrated in a high-dimensional simulation resembling decision-making by machine learning algorithms. Our estimator has smaller mean squared errors than alternative estimators. We finally apply our estimator to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, under which more than $10 billion in relief funding was allocated to hospitals via an algorithmic rule. The estimates suggest that the relief funding had little effect on COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit substantial selection bias.
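The idea of using an algorithmic funding rule as a natural experiment can be illustrated with a toy one-dimensional regression discontinuity. This is a simplified sketch with invented variables and a made-up cutoff, not the paper's high-dimensional estimator: the eligibility score drives both funding and outcomes, so a naive comparison is biased, while comparing units just above and below the cutoff isolates the (here zero) treatment effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical eligibility score used by an algorithmic funding rule.
score = rng.normal(size=n)
cutoff = 0.0
funded = (score >= cutoff).astype(float)

# Outcome depends on the score itself (selection), but the true
# effect of funding is zero in this simulation.
activity = 2.0 * score + rng.normal(size=n)

# Naive comparison of funded vs. unfunded hospitals is badly biased.
naive = activity[funded == 1].mean() - activity[funded == 0].mean()

# Local RD: compare observations just above and below the cutoff.
h = 0.1  # bandwidth
near = np.abs(score - cutoff) < h
rd = (activity[near & (funded == 1)].mean()
      - activity[near & (funded == 0)].mean())

print(f"naive: {naive:.2f}, RD: {rd:.2f}")  # RD is far smaller; true effect is 0
```

The paper's contribution is to extend this logic to rules defined on many input variables at once, where the "cutoff" is a high-dimensional boundary.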
Randomized controlled trials (RCTs) enroll hundreds of millions of subjects and involve many human lives. To improve subjects’ welfare, I propose a design of RCTs that I call Experiment-as-Market (EXAM). EXAM produces a welfare-maximizing allocation of treatment-assignment probabilities, is almost incentive-compatible for preference elicitation, and unbiasedly estimates any causal effect estimable with standard RCTs. I quantify these properties by applying EXAM to a water-cleaning experiment in Kenya. In this empirical setting, compared to standard RCTs, EXAM improves subjects’ predicted well-being while reaching similar treatment-effect estimates with similar precision.
What is the most statistically efficient way to do off-policy optimization with batch data from bandit feedback? For log data generated by contextual bandit algorithms, we consider offline estimators for the expected reward from a counterfactual policy. Our estimators are shown to have the lowest variance in a wide class of estimators, achieving variance reduction relative to standard estimators. We then apply our estimators to improve advertisement design by a major advertisement company. Consistent with the theoretical result, our estimators allow us to improve on the existing bandit algorithm with more statistical confidence compared to a state-of-the-art benchmark.
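A minimal member of the class of offline estimators is inverse propensity weighting (IPW), sketched below on simulated log data. The contexts, policies, and rewards are invented for illustration, and the paper's estimators are more efficient variants; the sketch only shows how logged propensities let us value a counterfactual policy without deploying it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Logged bandit data: contexts, actions from a logging policy, rewards.
context = rng.integers(0, 2, size=n)           # binary context
# Logging policy: P(action = 1 | context), assumed known from the logs.
p_log = np.where(context == 1, 0.7, 0.3)
action = (rng.random(n) < p_log).astype(int)
# True expected reward is 1 when the action matches the context, else 0.
mu = np.where(action == context, 1.0, 0.0)
reward = mu + rng.normal(scale=0.5, size=n)

# Counterfactual policy to evaluate: always play action = context.
# Its probability of the logged action is 1 if they match, else 0.
p_target = np.where(action == context, 1.0, 0.0)

# IPW estimator of the target policy's expected reward.
p_behavior = np.where(action == 1, p_log, 1 - p_log)
ipw_value = np.mean(p_target / p_behavior * reward)

print(f"estimated value: {ipw_value:.2f}")  # true value is 1.0
```

Reweighting by the ratio of target to logging propensities makes the logged sample mimic data the counterfactual policy would have generated.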
Centralized school assignment algorithms must distinguish between applicants with the same preferences and priorities. This is done with randomly assigned lottery numbers, nonlottery tie-breakers like test scores, or both. The New York City public high school match illustrates the latter, using test scores, grades, and interviews to rank applicants to screened schools, combined with lottery tie-breaking at unscreened schools. We show how to identify causal effects of school attendance in such settings. Our approach generalizes regression discontinuity designs to allow for multiple treatments and multiple running variables, some of which are randomly assigned. Lotteries generate assignment risk at screened as well as unscreened schools. Centralized assignment also identifies screened school effects away from screened school cutoffs. These features of centralized assignment are used to assess the predictive value of New York City’s school report cards. Grade A schools improve SAT math scores and increase the likelihood of graduating, though by less than OLS estimates suggest. Selection bias in OLS estimates is egregious for Grade A screened schools.
Many countries face growing concerns that population aging may make voting and policy-making myopic. This concern motivates electoral reforms that better reflect the voices of the young, such as weighting votes by voters' life expectancy. This paper predicts the effect of this counterfactual electoral reform on the 2016 U.S. presidential election. Using the American National Election Studies (ANES) data, I find that Hillary Clinton would have won the election had votes been weighted by life expectancy. I also discuss limitations due to data issues.
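The counterfactual tally amounts to a simple reweighting of ballots. The voters, life expectancies, and candidates below are made up for illustration and are not the ANES data or actual life tables; the point is only that weighting can flip a majority when candidate support is correlated with age.

```python
# Toy electorate: each voter has an age, a remaining life expectancy,
# and a candidate choice. All numbers are hypothetical.
voters = [
    # (age, remaining_life_expectancy, candidate)
    (25, 55.0, "A"),
    (35, 46.0, "A"),
    (60, 23.0, "B"),
    (70, 15.0, "B"),
    (80, 9.0, "B"),
]

def tally(weighted: bool) -> dict:
    """Count votes, optionally weighting each by remaining life expectancy."""
    totals = {}
    for age, life_exp, candidate in voters:
        weight = life_exp if weighted else 1.0
        totals[candidate] = totals.get(candidate, 0.0) + weight
    return totals

print(tally(weighted=False))  # one person, one vote: B wins 3-2
print(tally(weighted=True))   # life-expectancy weighted: A wins 101.0 to 47.0
```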
A growing number of school districts use centralized assignment mechanisms to allocate school seats in a manner that reflects student preferences and school priorities. Many of these assignment schemes use lotteries to ration seats when schools are oversubscribed. The resulting random assignment opens the door to credible quasi-experimental research designs for the evaluation of school effectiveness. Yet the question of how best to separate the lottery-generated randomization integral to such designs from non-random preferences and priorities remains open. This paper develops easily implemented empirical strategies that fully exploit the random assignment embedded in a wide class of mechanisms, while also revealing why seats are randomized at one school but not another. We use these methods to evaluate charter schools in Denver, one of a growing number of districts that combine charter and traditional public schools in a unified assignment system. The resulting estimates show large achievement gains from charter school attendance. Our approach generates efficiency gains over ad hoc methods, such as those that focus on schools ranked first, while also identifying a more representative average causal effect. We also show how to use centralized assignment mechanisms to identify causal effects in models with multiple school sectors.
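The lottery-generated assignment risk these designs condition on can be computed by simulating the tie-breaking lottery. Below is a toy single-school example with hypothetical priorities and capacity, not the paper's full mechanism: each applicant's assignment probability over repeated lottery draws is the propensity score that separates random seat rationing from non-random priorities.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-school lottery: 3 seats, applicants in two priority groups.
# Priority-1 applicants are seated first; ties broken by lottery number.
priorities = np.array([1, 1, 2, 2, 2, 2])  # 2 high-priority, 4 low-priority
capacity = 3
n_applicants = len(priorities)

def assign(lottery):
    # Rank by (priority, lottery number); seat the top `capacity`.
    order = np.lexsort((lottery, priorities))
    seated = np.zeros(n_applicants, dtype=bool)
    seated[order[:capacity]] = True
    return seated

# Monte Carlo propensity score: each applicant's probability of
# getting a seat across repeated lottery draws.
draws = 20_000
prob = np.zeros(n_applicants)
for _ in range(draws):
    prob += assign(rng.random(n_applicants))
prob /= draws

print(np.round(prob, 2))  # high-priority: 1.0; low-priority: ~0.25 each
```

Conditional on this score, which applicant actually gets the contested seat is purely lottery-driven, so seated and unseated applicants with the same score are comparable.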
Many school and college admission systems use centralized mechanisms to allocate seats based on applicant preferences and school priorities. When tie-breaking uses non-randomly assigned criteria like distance or a test score, applicants with the same preferences and priorities are not directly comparable. Nevertheless, the non-lottery setting generates a kind of local random assignment that opens the door to regression discontinuity designs. This paper introduces a hybrid RD/propensity score empirical strategy that exploits quasi-experiments embedded in serial dictatorship, a mechanism widely used for college and selective K-12 school admissions. We use our approach to estimate achievement effects of Chicago's exam schools.
Many centralized school admissions systems use lotteries to ration limited seats at oversubscribed schools. The resulting random assignment is used by empirical researchers to identify the effect of entering a school on outcomes like test scores. I first find that the two most popular empirical research designs may not successfully extract a random assignment of applicants to schools. When do the research designs overcome this problem? I show the following main results for a class of data-generating mechanisms containing those used in practice: One research design extracts a random assignment under a mechanism if and practically only if the mechanism is strategy-proof for schools. In contrast, the other research design does not necessarily extract a random assignment under any mechanism.