Many mental health disorders start in adolescence, and appropriate initial treatment may improve trajectories. But what is appropriate treatment? We use a large national database of insurance claims to examine the impact of initial mental health treatment on the outcomes of adolescent children over the next 2 years, where treatment is either consistent with US Food and Drug Administration guidelines, consistent with looser guidelines published by professional societies (gray area prescribing), or inconsistent with any guidelines (red-flag prescribing). We find that red-flag prescribing increases self-harm, use of emergency rooms, and health care costs, suggesting that treatment guidelines effectively scale up good treatment in practice.
During adolescence, peer interactions become increasingly central to children’s development, whereas the direct influence of parents wanes. Nevertheless, parents can continue to exert leverage by shaping their children’s peer groups. We construct and estimate a model of parenting with peer and neighborhood effects where parents intervene in peer formation and show that the model captures empirical patterns of skill accumulation, parenting style, and peer characteristics among US high school students. We find that interventions that move children to better neighborhoods lose impact when they are scaled up, because parents’ equilibrium responses push against successful integration with the new peer group.
We analyze a nonlinear pricing model where the seller controls both product pricing (screening) and buyer information about their own values (persuasion). We prove that the optimal mechanism always consists of finitely many signals and items, even with a continuum of buyer values. The seller optimally pools buyer values and reduces product variety to minimize informational rents. We show that value pooling is optimal even for finite value distributions if their entropy exceeds a critical threshold. We also provide sufficient conditions under which the optimal menu consists of a single item.
We propose a new formulation of the maximum score estimator that uses compositions of rectified linear unit (ReLU) functions, instead of indicator functions as in Manski (1975, 1985), to encode the sign alignment restrictions. Since the ReLU function is Lipschitz, our new ReLU-based maximum score criterion function is substantially easier to optimize using standard gradient-based optimization packages. We also show that our ReLU-based maximum score (RMS) estimator can be generalized to an umbrella framework defined by multi-index single-crossing (MISC) conditions, to which the original maximum score estimator cannot be applied. We establish the n^(-s/(2s+1)) convergence rate and asymptotic normality for the RMS estimator under order-s Hölder smoothness. In addition, we propose an alternative estimator using a further reformulation of RMS as a special layer in a deep neural network (DNN) architecture, which allows the estimation procedure to be implemented via state-of-the-art software and hardware for DNNs.
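To fix ideas, here is a minimal sketch of one plausible ReLU-based smoothing of the binary-choice maximum score criterion. It is illustrative only and is not the authors' exact construction: the surrogate `ramp`, the tuning constant `eps`, and the crude finite-difference ascent are all assumptions made for the example.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def ramp(t, eps=0.1):
    """Piecewise-linear surrogate for the indicator 1{t >= 0},
    itself a composition of ReLUs: relu(t/eps) - relu(t/eps - 1)."""
    return relu(t / eps) - relu(t / eps - 1.0)

def rms_criterion(beta, X, y, eps=0.1):
    """Smoothed maximum score objective: sum_i (2 y_i - 1) * ramp(x_i' beta).
    Manski's criterion replaces ramp with the indicator 1{x_i' beta >= 0}."""
    return np.sum((2.0 * y - 1.0) * ramp(X @ beta, eps))

# Toy data: binary outcome driven by the sign of x' beta0 plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
beta0 = np.array([1.0, -0.5])
y = (X @ beta0 + rng.normal(scale=0.3, size=500) >= 0).astype(float)

# Because the surrogate is Lipschitz, any gradient-based optimizer applies;
# here a crude finite-difference ascent with scale normalization for brevity.
beta = np.array([0.5, 0.5])
for _ in range(200):
    g = np.zeros_like(beta)
    for j in range(beta.size):
        e = np.zeros_like(beta)
        e[j] = 1e-4
        g[j] = (rms_criterion(beta + e, X, y) - rms_criterion(beta - e, X, y)) / 2e-4
    beta = beta + 0.01 * g
    beta = beta / np.linalg.norm(beta)  # maximum score identifies beta only up to scale

print(beta)
```

The key point the sketch illustrates is that the smoothed criterion admits ordinary gradient steps, whereas the indicator-based criterion is piecewise constant in beta and gives zero gradient almost everywhere.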
In the sorting literature, comparative statics for multidimensional assignment models with general output functions and input distributions is an important open question. We provide a complete theory of comparative statics for technological change in general multidimensional assignment models. Our main result is that any technological change is uniquely decomposed into two distinct components. The first component (gradient) gives a characterization of changes in marginal earnings through a Poisson equation. The second component (divergence-free) gives a characterization of labor reallocation. For U.S. data, we quantify equilibrium responses in sorting and earnings with respect to cognitive skill-biased technological change.
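A generic way to read this two-part decomposition is the Helmholtz-type splitting of a vector field. The following is only a textbook mathematical sketch under standard regularity and boundary assumptions; the paper's exact objects, domain, and boundary conditions may differ.

```latex
% Illustrative Helmholtz-type decomposition (not the paper's exact statement).
% A technological change, viewed as a vector field T on the type space,
% splits uniquely (under suitable boundary conditions) as
T = \nabla \varphi + \sigma, \qquad \nabla \cdot \sigma = 0,
% where the potential \varphi solves the Poisson equation
\Delta \varphi = \nabla \cdot T.
```

In this reading, the gradient part ∇φ is pinned down (up to constants) by a Poisson equation, matching the component that characterizes changes in marginal earnings, while σ is the divergence-free remainder, matching the component that characterizes labor reallocation.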
We study mechanism design for a sophisticated agent with non-expected utility (EU) preferences. We show that the revelation principle holds if and only if all types are EU maximizers: if at least one type is a non-EU maximizer, randomizing over dynamic mechanisms generates a strictly larger set of implementable allocations than using static mechanisms. Moreover, dynamic stochastic mechanisms can fully extract the private information of any type who doesn’t have uniformly quasi-concave preferences without providing that type any rent. Full-surplus extraction is possible in a broad variety of non-EU environments, but impossible for types with concave preferences.
We develop a methodology for modeling household income processes when subjective probabilistic assessments of future income are available. This allows us to flexibly estimate conditional CDFs directly using elicited individual subjective probabilities, and to obtain empirical measurements of subjective risk and subjective persistence. We then use two longitudinal surveys collected in rural India and rural Colombia to explore the nature of perceived income dynamics in those contexts. Our results suggest that linear income processes are rejected in favor of more flexible versions in both cases; subjective income distributions feature heteroskedasticity, conditional skewness, and nonlinear persistence.
We study mechanism design in environments where agents have private preferences and private information about a common payoff-relevant state. In such settings with multi-dimensional types, standard mechanisms fail to implement efficient allocations. We address this limitation by proposing data-driven mechanisms that condition transfers on additional post-allocation information, modeled as an estimator of the payoff-relevant state. Our mechanisms extend the classic Vickrey–Clarke–Groves framework. We show they achieve exact implementation in posterior equilibrium when the state is fully revealed or utilities are affine in an unbiased estimator. With a consistent estimator, they achieve approximate implementation that converges to exact implementation as the estimator converges, and we provide bounds on the convergence rate. We demonstrate applications to digital advertising auctions and AI shopping assistants, where user engagement naturally reveals relevant information, and to procurement auctions with consumer spot markets, where additional information arises from a pricing game played by the same agents.
How should a buyer design procurement mechanisms when suppliers’ costs are unknown, and the buyer does not have a prior belief? We demonstrate that notably simple mechanisms—those that share a constant fraction of the buyer’s utility with the seller—allow the buyer to realize a guaranteed positive fraction of the efficient social surplus across all possible costs. Moreover, a judicious choice of the share based on the known demand maximizes the surplus ratio guarantee that can be attained across all possible (arbitrarily complex and nonlinear) mechanisms and cost functions. Our results apply to related nonlinear pricing and optimal regulation problems.
This paper proposes a novel framework for the global optimization of a continuous function in a bounded rectangular domain. Specifically, we show that: (1) global optimization is equivalent to optimal strategy formation in a two-armed decision problem with known distributions, based on the Strategic Law of Large Numbers we establish; and (2) a sign-based strategy based on the solution of a parabolic PDE is asymptotically optimal. Motivated by this result, we propose a class of Strategic Monte Carlo Optimization (SMCO) algorithms, which use a simple strategy that makes coordinate-wise two-armed decisions based on the signs of the partial gradient (or, in practice, the first difference) of the objective function, without the need to solve PDEs. While this simple strategy is not generally optimal, it is sufficient for our SMCO algorithm to converge to a local optimizer from a single starting point, and to a global optimizer under a growing set of starting points. Numerical studies demonstrate the suitability of our SMCO algorithms for global optimization well beyond the theoretical guarantees established herein. For a wide range of test functions with challenging landscapes (multi-modal, non-differentiable, and discontinuous), our SMCO algorithms perform robustly, even in high-dimensional (d = 200 to 1000) settings. In fact, our algorithms outperform many state-of-the-art global optimizers, as well as local algorithms augmented with the same set of starting points as ours.
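The coordinate-wise, sign-based idea can be sketched in a few lines. This is a simplified illustration of that idea, not the authors' SMCO implementation: the step schedule, decay rate, and restart scheme are assumptions chosen for the example.

```python
import numpy as np

def smco_sketch(f, x0, step=0.5, iters=200, decay=0.99):
    """Toy sign-based coordinate search: each coordinate faces a two-armed
    decision -- step left or step right -- resolved by comparing the
    objective at the two arms (a first-difference proxy for the sign of
    the partial gradient). The step shrinks geometrically."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        for j in range(x.size):
            e = np.zeros_like(x)
            e[j] = step
            # Move toward the arm with the lower objective value.
            if f(x + e) < f(x - e):
                x = x + e
            else:
                x = x - e
        step *= decay
    return x

def rastrigin(x):
    """Multi-modal test function with global minimum 0 at the origin."""
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

# A growing set of starting points: run the search from each and keep the best.
rng = np.random.default_rng(1)
starts = rng.uniform(-5.0, 5.0, size=(20, 5))
best = min((smco_sketch(rastrigin, s) for s in starts), key=rastrigin)
print(rastrigin(best))
```

Note that the comparison uses only function values at the two arms, so the sketch applies unchanged to non-differentiable or discontinuous objectives, which is the setting the abstract emphasizes.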
We study how privacy regulation affects menu pricing by a monopolist platform that collects and monetizes personal data. Consumers differ in privacy valuation and sophistication: naive users ignore privacy losses, while sophisticated users internalize them. The platform designs prices and data collection options to screen users. Without regulation, privacy allocations are distorted and naive users are exploited. Regulation through privacy-protecting defaults can create a market for information by inducing payments for data; hard caps on data collection protect naive users but may restrict efficient data trade.
For a long time, the majority of economists doing empirical work relied on choice data, while data based on answers to hypothetical questions, stated preferences or measures of subjective beliefs were met with some skepticism. Although this has changed recently, much work needs to be done. In this paper, we emphasize the identifying content of new economic measures. In the first part of the paper, we discuss where the literature on measures in economics stands at the moment. We first consider how the design and use of new measures can help identify causal links and structural parameters under weaker assumptions than those required by approaches based exclusively on choice data. We then discuss how the availability of new measures can allow the study of richer models of human behavior that incorporate a wide set of factors. In the second part of the paper, we illustrate these issues with an application to the study of risk sharing and of deviations from perfect risk sharing.
This paper develops probability pricing, extending cash flow pricing to quantify the willingness-to-pay for changes in probabilities. We show that the value of any marginal change in probabilities can be expressed as a standard asset-pricing formula with hypothetical cash flows derived from changes in the survival function. This equivalence between probability and cash flow valuation allows us to construct hedging strategies and systematically decompose individual and aggregate willingness-to-pay. Four applications examine the valuation of changes in the distribution of aggregate consumption, the efficiency effects of varying performance precision in principal-agent problems, and the welfare implications of public and private information.
The 1996 US welfare reform introduced time limits on welfare receipt. We use quasi-experimental evidence and a rich life-cycle model to understand the impact of time limits on different margins of behavior and well-being. We stress the impact of marital status and marital transitions on mitigating the cost and impact of time limits. Time limits cause women to defer claiming in anticipation of future needs and to work more, effects that depend on the probabilities of marriage and divorce. They also cause an increase in employment among single mothers and reduce divorce, but their introduction costs women 0.7% of lifetime consumption, gross of the redistribution of government savings.
Noncarceral conviction is a common outcome of criminal court cases: for every person incarcerated, there are approximately three who were recently convicted but not sentenced to prison or jail. We extend the binary-treatment judge IV framework to settings with multiple treatments and use it to study the consequences of noncarceral conviction. We outline assumptions under which widely used 2SLS regressions recover margin-specific treatment effects, relate these assumptions to models of judge decision-making, and derive an expression that provides intuition about the direction and magnitude of asymptotic bias when a key assumption on judge decision-making is not met. We find that noncarceral conviction (relative to dismissal) leads to a large and long-lasting increase in recidivism for felony defendants in Virginia. In contrast, incarceration (relative to noncarceral conviction) leads to a short-run reduction in recidivism, consistent with incapacitation. Our empirical results suggest that noncarceral felony conviction is an important and overlooked driver of recidivism.