We study mechanism design for a sophisticated agent with non-expected utility (EU)
preferences. We show that the revelation principle holds if and only if all types are EU
maximizers: if at least one type is a non-EU maximizer, randomizing over dynamic
mechanisms generates a strictly larger set of implementable allocations than using static
mechanisms. Moreover, dynamic stochastic mechanisms can fully extract the private
information of any type who doesn’t have uniformly quasi-concave preferences without
providing that type any rent. Full-surplus extraction is possible in a broad variety of
non-EU environments, but impossible for types with concave preferences.
We study mechanism design in environments where agents have private preferences and private information about a common payoff-relevant state. In such settings with multi-dimensional types, standard mechanisms fail to implement efficient allocations. We address this limitation by proposing data-driven mechanisms that condition transfers on additional post-allocation information, modeled as an estimator of the payoff-relevant state. Our mechanisms extend the classic Vickrey–Clarke–Groves framework. We show they achieve exact implementation in posterior equilibrium when the state is fully revealed or utilities are affine in an unbiased estimator. With a consistent estimator, they achieve approximate implementation that converges to exact implementation as the estimator converges, and we provide bounds on the convergence rate. We demonstrate applications to digital advertising auctions and AI shopping assistants, where user engagement naturally reveals relevant information, and to procurement auctions with consumer spot markets, where additional information arises from a pricing game played by the same agents.
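As a schematic illustration of the kind of transfer rule described above, the following Groves-style payment conditions each agent's transfer on a post-allocation estimator of the state rather than on reports alone. The notation (reported private components, estimator, utility functions) is assumed for illustration and need not match the paper's exact construction.

```latex
% Assumed notation: a^*(\hat\theta) is the allocation chosen from reports \hat\theta,
% \hat\omega is the post-allocation estimator of the payoff-relevant state \omega,
% v_j is agent j's utility, and h_i is any function of the other agents' reports.
t_i(\hat\theta,\hat\omega) \;=\; \sum_{j\neq i} v_j\!\big(a^*(\hat\theta),\,\hat\theta_j,\,\hat\omega\big) \;+\; h_i(\hat\theta_{-i},\hat\omega),
\qquad
U_i \;=\; v_i\!\big(a^*(\hat\theta),\,\theta_i,\,\omega\big) \;+\; t_i(\hat\theta,\hat\omega).
```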
We propose a new formulation of the maximum score estimator that uses compositions of rectified linear unit (ReLU) functions, instead of indicator functions as in Manski (1975, 1985), to encode the sign alignment restrictions. Since the ReLU function is Lipschitz, our new ReLU-based maximum score criterion function is substantially easier to optimize using standard gradient-based optimization packages. We also show that our ReLU-based maximum score (RMS) estimator can be generalized to an umbrella framework defined by multi-index single-crossing (MISC) conditions, to which the original maximum score estimator cannot be applied. We establish the n^{-s/(2s+1)} convergence rate and asymptotic normality for the RMS estimator under order-s Hölder smoothness. In addition, we propose an alternative estimator using a further reformulation of RMS as a special layer in a deep neural network (DNN) architecture, which allows the estimation procedure to be implemented via state-of-the-art software and hardware for DNNs.
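For intuition, a minimal sketch of the idea follows: replace the indicator in a maximum-score-type sign-alignment criterion with a Lipschitz composition of ReLUs (here a clipped ramp), so the criterion can be handed to a standard gradient-based optimizer. The particular ReLU composition, bandwidth, and scale normalization below are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: swap the indicator 1{x'b >= 0} in a maximum-score-type
# criterion for the clipped ramp relu(z/h) - relu(z/h - 1), which is Lipschitz
# and hence amenable to gradient-based optimization.  The bandwidth h and the
# scale normalization (first coefficient fixed at 1) are assumptions.
import numpy as np
from scipy.optimize import minimize

def relu(z):
    return np.maximum(z, 0.0)

def neg_smoothed_score(beta, X, y, h=0.1):
    """Negative ReLU-smoothed sign-alignment criterion (minimized below)."""
    ramp = relu(X @ beta / h) - relu(X @ beta / h - 1.0)  # in [0, 1], approximates 1{X @ beta >= 0}
    return -np.mean((2.0 * y - 1.0) * ramp)

def fit_rms_sketch(X, y, h=0.1):
    k = X.shape[1]
    res = minimize(lambda b: neg_smoothed_score(np.r_[1.0, b], X, y, h),
                   x0=np.zeros(k - 1), method="BFGS")
    return np.r_[1.0, res.x]

# Simulated binary-choice data with median-zero errors
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = (X @ np.array([1.0, -0.5, 2.0]) + rng.logistic(size=2000) >= 0).astype(float)
print(fit_rms_sketch(X, y))
```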
We develop a methodology for modeling household income processes when subjective probabilistic assessments of future income are available. This allows us to flexibly estimate conditional CDFs directly from elicited individual subjective probabilities, and to obtain empirical measurements of subjective risk and subjective persistence. We then use two longitudinal surveys collected in rural India and rural Colombia to explore the nature of perceived income dynamics in those contexts. Our results suggest that linear income processes are rejected in favor of more flexible versions in both cases; subjective income distributions feature heteroskedasticity, conditional skewness and nonlinear persistence.
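As a minimal illustration of how elicited probabilities can be turned into a subjective CDF and a measure of subjective risk, the sketch below monotonically interpolates one respondent's reported probabilities at a few income thresholds. The thresholds, interpolation scheme, and risk measure are assumptions for illustration, not the surveys' design.

```python
# Sketch: build one respondent's subjective CDF of next-period income from elicited
# probabilities P(Y <= y_k) at a few thresholds y_k, then compute a subjective
# standard deviation as a simple risk measure.  Thresholds, interpolation, and the
# support bounds are illustrative assumptions.
import numpy as np
from scipy.interpolate import PchipInterpolator   # monotone, shape-preserving
from scipy.integrate import trapezoid

thresholds = np.array([0.0, 500.0, 1000.0, 2000.0, 4000.0])   # income grid (assumed units)
elicited   = np.array([0.0, 0.10, 0.45, 0.85, 1.0])           # reported P(Y <= y_k)

cdf = PchipInterpolator(thresholds, elicited)                  # subjective conditional CDF

# Subjective mean and risk via numerical integration of the fitted CDF
grid = np.linspace(thresholds[0], thresholds[-1], 2001)
pdf = np.gradient(cdf(grid), grid)                             # implied subjective density
mean = trapezoid(grid * pdf, grid)
sd = np.sqrt(trapezoid((grid - mean) ** 2 * pdf, grid))
print(f"subjective mean = {mean:.1f}, subjective sd = {sd:.1f}")
```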
How should a buyer design procurement mechanisms when suppliers’ costs are unknown and the buyer does not have a prior belief? We demonstrate that notably simple mechanisms—those that share a constant fraction of the buyer’s utility with the seller—allow the buyer to realize a guaranteed positive fraction of the efficient social surplus across all possible costs. Moreover, a judicious choice of the share based on the known demand maximizes the surplus-ratio guarantee that can be attained across all possible (arbitrarily complex and nonlinear) mechanisms and cost functions. The results apply to related nonlinear pricing and optimal regulation problems.
The 1996 US welfare reform introduced time limits on welfare receipt. We use quasi-experimental evidence and a rich life-cycle model to understand the impact of time limits on different margins of behavior and well-being. We stress the role of marital status and marital transitions in mitigating the cost and impact of time limits. Time limits cause women to defer claiming in anticipation of future needs and to work more, effects that depend on the probabilities of marriage and divorce. They also cause an increase in employment among single mothers and reduce divorce, but their introduction costs women 0.7% of lifetime consumption, gross of the redistribution of government savings.
This paper proposes a novel framework for the global optimization of a continuous function in a bounded rectangular domain. Specifically, we show that: (1) global optimization is equivalent to optimal strategy formation in a two-armed decision problem with known distributions, based on the Strategic Law of Large Numbers we establish; and (2) a sign-based strategy based on the solution of a parabolic PDE is asymptotically optimal. Motivated by this result, we propose a class of Strategic Monte Carlo Optimization (SMCO) algorithms, which use a simple strategy that makes coordinate-wise two-armed decisions based on the signs of the partial gradient (or, in practice, the first difference) of the objective function, without the need to solve PDEs. While this simple strategy is not generally optimal, it is sufficient for our SMCO algorithm to converge to a local optimizer from a single starting point, and to a global optimizer under a growing set of starting points. Numerical studies demonstrate the suitability of our SMCO algorithms for global optimization well beyond the theoretical guarantees established herein. For a wide range of test functions with challenging landscapes (multi-modal, non-differentiable and discontinuous), our SMCO algorithms perform robustly well, even in high-dimensional (d = 200 ∼ 1000) settings. In fact, our algorithms outperform many state-of-the-art global optimizers, as well as local algorithms augmented with the same set of starting points as ours.
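The following sketch illustrates the coordinate-wise, sign-based search strategy described above: in each coordinate, take a two-armed decision based on the sign of a first difference of the objective, repeat from several starting points, and keep the best point visited. The step-size schedule, stopping rule, and multi-start scheme are assumptions for illustration, not the paper's SMCO algorithm.

```python
# Illustrative sketch of a coordinate-wise, sign-based search: each coordinate is
# updated in the direction indicated by the sign of a first difference of the
# objective, and the search is repeated from several starting points.
import numpy as np

def smco_sketch(f, starts, lower, upper, n_iter=500, delta=1e-3, step0=0.1):
    """Coordinate-wise sign-based maximization of f on a box, from multiple starts."""
    best_x, best_val = None, -np.inf
    for x0 in np.atleast_2d(starts):
        x = np.array(x0, dtype=float)
        for t in range(1, n_iter + 1):
            step = step0 / np.sqrt(t)                   # shrinking step size (assumed schedule)
            for j in range(x.size):                     # two-armed decision per coordinate
                e = np.zeros_like(x); e[j] = delta
                x[j] = np.clip(x[j] + step * np.sign(f(x + e) - f(x - e)),
                               lower[j], upper[j])
            if f(x) > best_val:
                best_x, best_val = x.copy(), f(x)
    return best_x, best_val

# Example: a multi-modal objective on [-5, 5]^2 with ten random starting points
f = lambda x: -np.sum(x ** 2) + 2.0 * np.cos(3.0 * x).sum()
starts = np.random.default_rng(1).uniform(-5.0, 5.0, size=(10, 2))
print(smco_sketch(f, starts, lower=[-5.0, -5.0], upper=[5.0, 5.0]))
```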
We study how privacy regulation affects menu pricing by a monopolist platform that collects and monetizes personal data. Consumers differ in privacy valuation and sophistication: naive users ignore privacy losses, while sophisticated users internalize them. The platform designs prices and data collection options to screen users. Without regulation, privacy allocations are distorted and naive users are exploited. Regulation through privacy-protecting defaults can create a market for information by inducing payments for data; hard caps on data collection protect naive users but may restrict efficient data trade.
For a long time, the majority of economists doing empirical work relied on choice data, while
data based on answers to hypothetical questions, stated preferences or measures of subjective
beliefs were met with some skepticism. Although this has changed recently, much work needs
to be done. In this paper, we emphasize the identifying content of new economic measures. In
the first part of the paper, we discuss where the literature on measures in economics stands at
the moment. We first consider how the design and use of new measures can help identify causal
links and structural parameters under weaker assumptions than those required by approaches
based exclusively on choice data. We then discuss how the availability of new measures can
allow the study of richer models of human behavior that incorporate a wide set of factors. In
the second part of the paper, we illustrate these issues with an application to the study of risk
sharing and of deviations from perfect risk sharing.
This paper develops probability pricing, extending cash flow pricing to quantify the willingness-to-pay for changes in probabilities. We show that the value of any marginal change in probabilities can be expressed as a standard asset-pricing formula with hypothetical cash flows derived from changes in the survival function. This equivalence between probability and cash flow valuation allows us to construct hedging strategies and systematically decompose individual and aggregate willingness-to-pay. Four applications examine the valuation of changes in the distribution of aggregate consumption, the efficiency effects of varying performance precision in principal-agent problems, and the welfare implications of public and private information.
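The equivalence can be sketched in a one-line derivation, with assumed notation that need not match the paper's: integrating expected utility by parts turns a marginal change in the survival function into a hypothetical cash flow that is priced by marginal utility, that is, by a standard asset-pricing formula.

```latex
% Assumed notation: c has CDF F and survival function \bar F = 1 - F on [\underline{c}, \overline{c}],
% u is utility, and \lambda is the marginal utility of the numeraire, so m(c) = u'(c)/\lambda
% plays the role of a pricing kernel.
\mathbb{E}[u(c)] = \int u(c)\,\mathrm{d}F(c)
                = u(\underline{c}) + \int u'(c)\,\bar F(c)\,\mathrm{d}c
\quad\Longrightarrow\quad
\mathrm{WTP}(\Delta \bar F)
  = \int \underbrace{\tfrac{u'(c)}{\lambda}}_{\text{pricing kernel } m(c)}\,
         \underbrace{\Delta \bar F(c)}_{\text{hypothetical cash flow}}\,\mathrm{d}c .
```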
Collective bargaining agreements (CBAs) specify the contractual rights of unionized workers, but their full legal content has not yet been analyzed by economists. This paper develops novel natural language methods to analyze the empirical determinants and economic value of these rights using a new collection of 30,000 CBAs from Canada in the period 1986–2015. We parse legally binding rights (e.g., “workers shall receive…”) and obligations (e.g., “the employer shall provide…”) from contract text, and validate our measures through evaluation of clause pairs and comparison to firm surveys on HR practices. Using time-varying province-level variation in labor income tax rates, we find that higher taxes increase the share of worker-rights clauses while reducing pre-tax wages in unionized firms, consistent with a substitution effect away from taxed wages toward untaxed rights. Further, an exogenous increase in the value of outside options (from a leave-one-out instrument for labor demand) increases the share of worker-rights clauses in CBAs. Combining the regression estimates, we infer that a one-standard-deviation increase in worker rights is valued at about 5.7% of wages.
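As a toy illustration of the kind of clause parsing described (the paper itself uses richer natural-language methods), the sketch below classifies clauses by which party "shall" binds; the regular expressions and labels are assumptions for illustration only.

```python
# Toy sketch of sorting CBA clauses into rights (e.g., "workers shall receive ...")
# and obligations (e.g., "the employer shall provide ..."), following the examples
# quoted in the abstract.  The patterns and labels are illustrative assumptions and
# far simpler than the paper's natural-language methods.
import re

RIGHTS = re.compile(r"\b(employees?|workers?)\s+shall\b", re.IGNORECASE)            # "workers shall receive ..."
OBLIGATIONS = re.compile(r"\b(the\s+)?(employer|company)\s+shall\b", re.IGNORECASE) # "the employer shall provide ..."

def classify_clause(text):
    if RIGHTS.search(text):
        return "right"
    if OBLIGATIONS.search(text):
        return "obligation"
    return "other"

clauses = [
    "Workers shall receive overtime pay at one and one-half times the regular rate.",
    "The employer shall provide safety equipment at no cost to the employee.",
    "Seniority lists shall be posted quarterly.",
]
for c in clauses:
    print(classify_clause(c), "|", c)
```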
This paper studies nonparametric local (over-)identification, in the sense of Chen and Santos (2018), and the associated semiparametric efficiency in modern causal frameworks. We develop a unified approach that begins by translating structural models with latent variables into their induced statistical models of observables and then analyzes local overidentification through conditional moment restrictions. We apply this approach to three leading models: (i) the general treatment model under unconfoundedness, (ii) the negative control model, and (iii) the long-term causal inference model under unobserved confounding. The first design yields a locally just-identified statistical model, implying that all regular asymptotically linear estimators of the treatment effect share the same asymptotic variance, equal to the (trivial) semiparametric efficiency bound. In contrast, the latter two models involve nonparametric endogeneity and are naturally locally overidentified; consequently, some doubly robust orthogonal moment estimators of the average treatment effect are inefficient. Whereas existing work typically imposes strong conditions to restore just-identification before deriving the efficiency bound, we relax such assumptions and characterize the general efficiency bound, along with efficient estimators, in the overidentified models (ii) and (iii).
Edgeworth expansions are developed for the finite sample distribution of the least squares estimator in a time series parametric first order autoregression with Hilbert space curves of cross section data. The main result extends to this functional data environment the Edgeworth expansion in the corresponding scalar time series AR(1). In doing so, the results show how function-valued cross section data, and hence general forms of cross section dependence, affect the finite sample distribution of the serial correlation coefficient. Autoregressions with functional fixed effect intercepts are included and the results therefore relate to dynamic panel autoregression with individual effects. The primary impact of the use of high-dimensional curved cross section data is to reduce the variation in scalar regression estimation and provide some improvement in the accuracy of the usual asymptotic approximation to the finite sample distribution. Limit results for the expansions under full cross section dependence matching the scalar time series case and independence matching the dynamic panel case are given as special cases. The findings are supported by numerical computations of the exact distributions and the approximations.
Policies that mandate disclosure of innovative project outcomes aim to increase innovation by limiting wasteful duplicative innovation. Yet, such policies change not only the ex-post information environment but also firms' ex-ante innovation incentives. Firms may slow down their own innovation efforts in anticipation of increased disclosure by others. We examine the innovation-related impacts of the 2017 FDA Final Rule amendment, which mandates disclosure of clinical trial results for pharmaceutical firms. We show that the policy hastened and increased disclosure of results for clinical trials post-completion, but also increased the time to completion of clinical trials, the time between early phases of clinical trials, and delays in development-related investments. We provide evidence consistent with mandated disclosure leading firms to wait to learn from their competitors. Our results suggest that mandating disclosure may slow innovation when there is value to waiting.
We propose SLIM (Stochastic Learning and Inference in overidentified Models), a scalable stochastic approximation framework for nonlinear GMM. SLIM forms iterative updates from independent mini-batches of moments and their derivatives, producing unbiased directions that ensure almost-sure convergence. It requires neither a consistent initial estimator nor global convexity and accommodates both fixed-sample and random-sampling asymptotics. We further develop an optional second-order refinement and inference procedures based on random scaling and plug-in methods, including plug-in, debiased plug-in, and online versions of the Sargan–Hansen J-test tailored to stochastic learning. In Monte Carlo experiments based on a nonlinear EASI demand system with 576 moment conditions, 380 parameters, and n = 10^5, SLIM solves the model in under 1.4 hours, whereas full-sample GMM in Stata on a powerful laptop converges only after 18 hours. The debiased plug-in J-test delivers satisfactory finite-sample inference, and SLIM scales smoothly to n = 10^6.
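The sketch below illustrates the mini-batch stochastic-approximation idea described above: the sample moments and their Jacobian are evaluated on independent mini-batches, so their product is an unbiased direction for the GMM objective. The identity weighting matrix, step-size schedule, and Polyak-Ruppert averaging are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch of a mini-batch stochastic-approximation update for nonlinear
# GMM: moments and their Jacobian come from independent mini-batches, giving an
# unbiased direction; identity weighting and the step-size schedule are assumptions.
import numpy as np

def slim_sketch(moment_fn, jac_fn, data, theta0, n_iter=5000, batch=64, step0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    theta = np.array(theta0, dtype=float)
    theta_bar = np.zeros_like(theta)
    for t in range(1, n_iter + 1):
        i1 = rng.choice(n, size=batch, replace=False)    # mini-batch for the moments
        i2 = rng.choice(n, size=batch, replace=False)    # independent mini-batch for the Jacobian
        g = moment_fn(theta, data[i1]).mean(axis=0)      # averaged moment vector, shape (m,)
        G = jac_fn(theta, data[i2]).mean(axis=0)         # averaged Jacobian, shape (m, k)
        theta -= (step0 / np.sqrt(t)) * (G.T @ g)        # unbiased direction, identity weighting
        theta_bar += (theta - theta_bar) / t             # running average of the iterates
    return theta_bar

# Example: just-identified linear IV moments g_i(theta) = z_i * (y_i - x_i @ theta)
rng = np.random.default_rng(1)
n, k = 10_000, 2
z = rng.normal(size=(n, k)); x = z + 0.3 * rng.normal(size=(n, k))
y = x @ np.array([1.0, -2.0]) + rng.normal(size=n)
data = np.column_stack([y, x, z])
moments = lambda th, d: d[:, 1 + k:] * (d[:, 0] - d[:, 1:1 + k] @ th)[:, None]   # (batch, m)
jac = lambda th, d: -d[:, 1 + k:, None] * d[:, None, 1:1 + k]                    # (batch, m, k)
print(slim_sketch(moments, jac, data, theta0=np.zeros(k)))
```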