We analyze a nonlinear pricing model where the seller controls both product pricing (screening) and buyer information about their own values (persuasion). We prove that the optimal mechanism always consists of finitely many signals and items, even with a continuum of buyer values. The seller optimally pools buyer values and reduces product variety to minimize informational rents. We show that value pooling is optimal even for finite value distributions if their entropy exceeds a critical threshold. We also provide sufficient conditions under which the optimal menu offers only a single item.
We propose a new formulation of the maximum score estimator that uses compositions of rectified linear unit (ReLU) functions, instead of indicator functions as in Manski (1975, 1985), to encode the sign alignment restrictions. Since the ReLU function is Lipschitz, our new ReLU-based maximum score criterion function is substantially easier to optimize using standard gradient-based optimization packages. We also show that our ReLU-based maximum score (RMS) estimator can be generalized to an umbrella framework defined by multi-index single-crossing (MISC) conditions, to which the original maximum score estimator cannot be applied. We establish the n^{-s/(2s+1)} convergence rate and asymptotic normality for the RMS estimator under order-s Hölder smoothness. In addition, we propose an alternative estimator using a further reformulation of RMS as a special layer in a deep neural network (DNN) architecture, which allows the estimation procedure to be implemented via state-of-the-art software and hardware for DNNs.
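To fix ideas, the following minimal sketch shows one way a ReLU-based surrogate for the maximum score criterion can be optimized with an off-the-shelf gradient-based routine. The particular ramp construction, the tuning constant tau, and the toy data-generating process are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: a ReLU-based surrogate for the maximum score
# criterion. The exact composition of ReLUs in the paper may differ; tau and
# the ramp construction below are assumptions made for illustration.
import numpy as np
from scipy.optimize import minimize

def relu(u):
    return np.maximum(u, 0.0)

def ramp(u, tau=0.1):
    # Built from ReLUs: 0 for u <= 0, linear on (0, tau), 1 for u >= tau.
    # A Lipschitz surrogate for the indicator 1{u >= 0} in Manski's criterion.
    return (relu(u) - relu(u - tau)) / tau

def rms_criterion(beta, y, X, tau=0.1):
    # Smoothed sign-alignment objective: large when sign(X @ beta) agrees with 2y - 1.
    index = X @ beta
    return -np.mean((2 * y - 1) * ramp(index, tau))  # negated for minimization

# Toy data (hypothetical): binary choice satisfying a median restriction.
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
beta_true = np.array([1.0, -0.5, 0.25])
y = (X @ beta_true + rng.logistic(size=n) >= 0).astype(float)

# Gradient-based optimization is feasible because the surrogate is Lipschitz.
res = minimize(rms_criterion, x0=np.ones(d), args=(y, X), method="L-BFGS-B")
beta_hat = res.x / np.linalg.norm(res.x)  # scale normalization, as in maximum score
print(beta_hat)
```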
In the sorting literature, comparative statics for multidimensional assignment models with general output functions and input distributions is an important open question. We provide a complete theory of comparative statics for technological change in general multidimensional assignment models. Our main result is that any technological change is uniquely decomposed into two distinct components. The first component (gradient) characterizes changes in marginal earnings through a Poisson equation. The second component (divergence-free) characterizes labor reallocation. Using U.S. data, we quantify equilibrium responses in sorting and earnings with respect to cognitive skill-biased technological change.
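For readers unfamiliar with this type of decomposition, the following sketch records the standard Helmholtz-type identity consistent with the abstract's description; the notation is ours and the paper's exact statement may differ.

```latex
% Hedged sketch (our notation): a Helmholtz-type decomposition of a
% technological change represented as a vector field $T$ on the type space.
\[
  T = \nabla \varphi + \sigma, \qquad \nabla \cdot \sigma = 0,
\]
% Taking the divergence of both sides yields the Poisson equation for the
% potential $\varphi$ (with suitable boundary conditions):
\[
  \Delta \varphi = \nabla \cdot T .
\]
% In the spirit of the abstract, the gradient part $\nabla\varphi$ would capture
% changes in marginal earnings, while the divergence-free part $\sigma$ would
% capture labor reallocation.
```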
We study how privacy regulation affects menu pricing by a monopolist platform that collects and monetizes personal data. Consumers differ in privacy valuation and sophistication: naive users ignore privacy losses, while sophisticated users internalize them. The platform designs prices and data collection options to screen users. Without regulation, privacy allocations are distorted and naive users are exploited. Regulation through privacy-protecting defaults can create a market for information by inducing payments for data; hard caps on data collection protect naive users but may restrict efficient data trade.
How should a buyer design procurement mechanisms when suppliers’ costs are unknown, and the buyer does not have a prior belief? We demonstrate that notably simple mechanisms—those that share a constant fraction of the buyer's utility with the seller—allow the buyer to realize a guaranteed positive fraction of the efficient social surplus across all possible costs. Moreover, a judicious choice of the share based on the known demand maximizes the surplus ratio guarantee that can be attained across all possible (arbitrarily complex and nonlinear) mechanisms and cost functions. Results apply to related nonlinear pricing and optimal regulation problems.
This paper proposes a novel framework for the global optimization of a continuous function in a bounded rectangular domain. Specifically, we show that: (1) global optimization is equivalent to optimal strategy formation in a two-armed decision problem with known distributions, based on the Strategic Law of Large Numbers we establish; and (2) a sign-based strategy based on the solution of a parabolic PDE is asymptotically optimal. Motivated by this result, we propose a class of Strategic Monte Carlo Optimization (SMCO) algorithms, which use a simple strategy that makes coordinate-wise two-armed decisions based on the signs of the partial gradient (or, in practice, the first difference) of the objective function, without the need to solve PDEs. While this simple strategy is not generally optimal, it is sufficient for our SMCO algorithm to converge to a local optimizer from a single starting point, and to a global optimizer under a growing set of starting points. Numerical studies demonstrate the suitability of our SMCO algorithms for global optimization well beyond the theoretical guarantees established herein. For a wide range of test functions with challenging landscapes (multi-modal, non-differentiable and discontinuous), our SMCO algorithms perform robustly well, even in high-dimensional (d = 200 to 1000) settings. In fact, our algorithms outperform many state-of-the-art global optimizers, as well as local algorithms augmented with the same set of starting points as ours.
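The following sketch illustrates the flavor of a coordinate-wise, sign-based search of the kind described above. The step-size schedule, stopping rule, and use of symmetric first differences are assumptions made for illustration and are not the paper's algorithm.

```python
# Minimal sketch of a sign-based, coordinate-wise search in the spirit of the
# SMCO description above. Step sizes, schedules, and the multi-start rule are
# illustrative assumptions.
import numpy as np

def smco_like_search(f, lower, upper, n_starts=20, n_iters=500, h=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    d = len(lower)
    best_x, best_val = None, -np.inf
    for _ in range(n_starts):                      # growing set of starting points
        x = rng.uniform(lower, upper)
        step = 0.1 * (upper - lower)               # per-coordinate step sizes
        for t in range(n_iters):
            for j in range(d):                     # coordinate-wise two-armed decision
                e = np.zeros(d)
                e[j] = h
                diff = f(x + e) - f(x - e)         # first difference along coordinate j
                x[j] = np.clip(x[j] + np.sign(diff) * step[j], lower[j], upper[j])
            step *= 0.99                           # shrinking steps (illustrative schedule)
        val = f(x)
        if val > best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val

# Example: maximize a multi-modal function on [-5, 5]^2.
f = lambda x: -np.sum(x**2) + np.sum(np.cos(3 * x))
x_star, v_star = smco_like_search(f, np.full(2, -5.0), np.full(2, 5.0))
```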
For a long time, the majority of economists doing empirical work relied on choice data, while data based on answers to hypothetical questions, stated preferences or measures of subjective beliefs were met with some skepticism. Although this has changed recently, much work needs to be done. In this paper, we emphasize the identifying content of new economic measures. In the first part of the paper, we discuss where the literature on measures in economics stands at the moment. We first consider how the design and use of new measures can help identify causal links and structural parameters under weaker assumptions than those required by approaches based exclusively on choice data. We then discuss how the availability of new measures can allow the study of richer models of human behavior that incorporate a wide set of factors. In the second part of the paper, we illustrate these issues with an application to the study of risk sharing and of deviations from perfect risk sharing.
This paper develops probability pricing, extending cash flow pricing to quantify the willingness-to-pay for changes in probabilities. We show that the value of any marginal change in probabilities can be expressed as a standard asset-pricing formula with hypothetical cash flows derived from changes in the survival function. This equivalence between probability and cash flow valuation allows us to construct hedging strategies and systematically decompose individual and aggregate willingness-to-pay. Four applications examine the valuation of changes in the distribution of aggregate consumption, the efficiency effects of varying performance precision in principal-agent problems, and the welfare implications of public and private information.
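As a rough illustration of how a change in probabilities can be recast as pricing a hypothetical cash flow against the survival function, the following identity (in our own notation, and not necessarily the paper's formula) uses integration by parts, assuming boundary terms vanish.

```latex
% Hedged illustration (our notation): for an outcome $x$ with CDF $F$,
% survival function $S(x) = 1 - F(x)$, and a smooth valuation $\int u(x)\,dF(x)$,
% a marginal shift $F \mapsto F + \varepsilon\,\Delta F$ changes the value by
\[
  \frac{d}{d\varepsilon}\Big|_{\varepsilon=0}
  \int u(x)\,d\bigl(F + \varepsilon\,\Delta F\bigr)(x)
  = \int u(x)\,d\Delta F(x)
  = \int u'(x)\,\Delta S(x)\,dx,
\]
% where the last step uses integration by parts, $\Delta S = -\Delta F$, and
% vanishing boundary terms. The right-hand side prices a hypothetical cash flow
% $u'(x)$ against the change in the survival function, which is the flavor of
% the equivalence described above.
```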
The 1996 US welfare reform introduced time limits on welfare receipt. We use quasi-experimental evidence and a rich life-cycle model to understand the impact of time limits on different margins of behavior and well-being. We stress the role of marital status and marital transitions in mitigating the cost and impact of time limits. Time limits cause women to defer claiming in anticipation of future needs and to work more, effects that depend on the probabilities of marriage and divorce. They also cause an increase in employment among single mothers and reduce divorce, but their introduction costs women 0.7% of lifetime consumption, gross of the redistribution of government savings.
Noncarceral conviction is a common outcome of criminal court cases: for every person incarcerated, there are approximately three who were recently convicted but not sentenced to prison or jail. We extend the binary-treatment judge IV framework to settings with multiple treatments and use it to study the consequences of noncarceral conviction. We outline assumptions under which widely used 2SLS regressions recover margin-specific treatment effects, relate these assumptions to models of judge decision-making, and derive an expression that provides intuition about the direction and magnitude of asymptotic bias when a key assumption on judge decision-making is not met. We find that noncarceral conviction (relative to dismissal) leads to a large and long-lasting increase in recidivism for felony defendants in Virginia. In contrast, incarceration (relative to noncarceral conviction) leads to a short-run reduction in recidivism, consistent with incapacitation. Our empirical results suggest that noncarceral felony conviction is an important and overlooked driver of recidivism.
In many real recommender systems, novel items are added frequently over time. The importance of sufficiently presenting novel actions has been widely acknowledged as key to improving long-term user engagement. Recent work builds on Off-Policy Learning (OPL), which trains a policy from logged data alone; however, existing methods can be unsafe in the presence of novel actions. Our goal is to develop a framework that enforces exploration of novel actions with a safety guarantee. To this end, we first develop Safe Off-Policy Policy Gradient (Safe OPG), a model-free safe OPL method based on high-confidence off-policy evaluation. In our first experiment, we observe that Safe OPG almost always satisfies the safety requirement, even when existing methods violate it greatly. However, the result also reveals that Safe OPG tends to be too conservative, suggesting a difficult tradeoff between guaranteeing safety and exploring novel actions. To overcome this tradeoff, we also propose a novel framework called Deployment-Efficient Policy Learning for Safe User Exploration, which leverages a safety margin and gradually relaxes the safety regularization over multiple (but not many) deployments. Our framework thus enables exploration of novel actions while guaranteeing the safe operation of recommender systems.
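To make the high-confidence off-policy evaluation building block concrete, here is a minimal sketch of an inverse-propensity-score estimate with a Hoeffding-style lower confidence bound. The clipping constant, the confidence level, and the safety check are illustrative assumptions; the paper's Safe OPG may use a different estimator or bound.

```python
# Illustrative sketch: IPS off-policy value estimate with a high-confidence
# lower bound, the kind of quantity a safety constraint can be built on.
import numpy as np

def ips_lower_bound(rewards, logged_propensities, new_propensities,
                    delta=0.05, weight_cap=20.0):
    # Clipped importance weights pi_new(a|x) / pi_logged(a|x).
    w = np.minimum(new_propensities / logged_propensities, weight_cap)
    ips = w * rewards                     # per-sample IPS value estimates
    n = len(ips)
    estimate = ips.mean()
    # Hoeffding bound for values in [0, weight_cap], assuming rewards in [0, 1].
    radius = weight_cap * np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return estimate, estimate - radius    # point estimate and lower confidence bound

# A safety check might then require: lower_bound >= (1 - eps) * baseline_value.
```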
Positive assortative matching refers to the tendency of individuals with similar characteristics to form partnerships. Measuring the extent to which assortative matching differs between two economies is challenging when the marginal distributions of the sorting characteristic (e.g., education) change for either or both sexes. We show how the use of different measures can generate different conclusions. We provide an axiomatic characterization of the odds ratio, normalized trace, and likelihood ratio, and a structural economic interpretation of the odds ratio. We use our approach to show that marital sorting by education changed substantially between the 1950s and the 1970s cohorts.
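The following two-category sketch illustrates one of the measures discussed above, the odds ratio, computed from a contingency table of couples by education. The counts are made up, and the paper's analysis may use finer education categories alongside the other measures.

```python
# Minimal illustration: odds ratio from a 2x2 table of couples by education
# (rows = wife's education, columns = husband's education). Numbers are hypothetical.
import numpy as np

counts = np.array([[40., 10.],    # [low-low, low-high]
                   [15., 35.]])   # [high-low, high-high]

odds_ratio = (counts[0, 0] * counts[1, 1]) / (counts[0, 1] * counts[1, 0])
# Values above 1 indicate positive assortative matching; the odds ratio is
# unaffected by rescaling rows or columns, which is why it is attractive when
# the marginal distributions of education shift across cohorts.
print(odds_ratio)  # about 9.33
```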
We propose SLIM (Stochastic Learning and Inference in overidentified Models), a scalable stochastic approximation framework for nonlinear GMM. SLIM forms iterative updates from independent mini-batches of moments and their derivatives, producing unbiased directions that ensure almost-sure convergence. It requires neither a consistent initial estimator nor global convexity and accommodates both fixed-sample and random-sampling asymptotics. We further develop an optional second-order refinement and inference procedures based on random scaling and plug-in methods, together with plug-in, debiased plug-in, and online versions of the Sargan–Hansen J-test tailored to stochastic learning. In Monte Carlo experiments based on a nonlinear EASI demand system with 576 moment conditions, 380 parameters, and n = 10^5, SLIM solves the model in under 1.4 hours, whereas full-sample GMM in Stata on a powerful laptop converges only after 18 hours. The debiased plug-in J-test delivers satisfactory finite-sample inference, and SLIM scales smoothly to n = 10^6.
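The following schematic sketch conveys the flavor of a stochastic-approximation update for GMM built from independent mini-batches of moments and derivatives. The weighting matrix, step-size rule, and batch sizes are illustrative assumptions, not SLIM's exact algorithm.

```python
# Schematic sketch of a mini-batch stochastic-approximation update for GMM.
# Using two independent mini-batches (one for the Jacobian, one for the moments)
# keeps the product an unbiased direction, in the spirit of the description above.
import numpy as np

def slim_like_gmm(moment_fn, jacobian_fn, data, theta0, W,
                  n_steps=10_000, batch_size=256, lr0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    n = data.shape[0]
    for t in range(1, n_steps + 1):
        idx_g = rng.choice(n, batch_size, replace=False)    # mini-batch for moments
        idx_G = rng.choice(n, batch_size, replace=False)    # independent mini-batch for derivatives
        g = moment_fn(theta, data[idx_g]).mean(axis=0)      # (m,)   average moment vector
        G = jacobian_fn(theta, data[idx_G]).mean(axis=0)    # (m, p) average Jacobian
        direction = G.T @ W @ g                             # unbiased direction for the GMM objective
        theta -= (lr0 / np.sqrt(t)) * direction             # Robbins-Monro step size (assumption)
    return theta
```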
Substantial advances toward global decarbonization have been made in areas such as electricity generation and the electrification of building heat and road transport, yet the decarbonization of energy-intensive industries remains a formidable but crucial challenge. Decarbonization of the industrial sector, whose direct emissions account for about 25% of global carbon dioxide emissions, is essential for transitioning the world economy toward a sustainable growth path. With present technologies and policies, such decarbonization appears technically possible, but difficult and costly. Here, we highlight the pressing need for new lines of research on two emerging frontiers. The first quantifies how industrial decarbonization technologies and policies interact with the broader economy. The second builds on growing data availability and policy experience with industrial decarbonization to provide broad-scale ex post quantifications of its impacts as an essential empirical complement to a largely modeling-based literature to date.