
Cowles Foundation for Research in Economics

Fostering the development and application of rigorous logical, mathematical, and statistical methods of analysis

Cowles Foundation Discussion Papers

New Cowles Foundation Discussion Papers

Discussion Paper
Abstract

Modern theories of the business cycle do not allow for the simultaneous rational choice of both prices and quantities, instead assuming that an “invisible hand” determines one of these variables to clear markets. In this paper, we develop a macroeconomic framework in which both prices and quantities are chosen directly by firms, and exchange is both voluntary and efficient. Because of uncertainty about demand and productivity, individual product markets can be in excess supply or rationed. The absence of market-clearing changes pricing and production in qualitatively important ways: markups are no longer determined solely by the elasticity of demand, and higher uncertainty reduces production and increases markups. In equilibrium, production in rationed markets has a negative aggregate demand externality on demand in slack markets. Unlike in New Keynesian economies, monetary shocks propagate by reducing economic slack, raising aggregate labor productivity and consumption, while uncertainty shocks act as stagflationary cost-push shocks. We integrate our theory of disequilibrium into a dynamic, rational-expectations “New Old Keynesian Model” and demonstrate its implications for the business cycle.

Discussion Paper
Abstract

We develop a framework for the optimal pricing and product design of LLMs in which a provider sells menus of token budgets to users who differ in their valuations across a continuum of tasks. Under a homogeneous production technology, we show that users’ high-dimensional type profiles are summarized by a scalar index, reducing the seller’s problem to one-dimensional screening. The optimal mechanism takes the form of committed-spend contracts: buyers pay for a budget that they allocate across token classes priced at marginal cost. We extend the analysis to environments with multiple differentiated models and to competition between a proprietary leader and an open-source fringe, showing that competitive pressure reshapes both the intensive and extensive margins of compute provision. Each element of our theory (token-budget menus, maximum- and minimum-spend plans, multi-model versioning, and linear API pricing) has a direct counterpart in the observed pricing practices of providers such as Anthropic, OpenAI, and GitHub.

Discussion Paper
Abstract

Many economic parameters are identified by “thin sets” (submanifolds with Lebesgue measure zero) and hence are difficult to recover from data in an ambient space. This paper provides a unified theory for estimation and inference on such “thin-set” identified functionals. We show that thin sets are not equally thin: their intrinsic dimensionality m matters in a precise manner. For a nonparametric regression h0 with Hölder smoothness s and d-dimensional covariates in the ambient space, we show that n^{-s/(2s+d-m)} is the minimax optimal rate of estimating linear and nonlinear (e.g., quadratic, upper contour) integrals of h0 on an m-dimensional submanifold (0 ≤ m < d), the fastest rate attainable by any estimator. The minimax lower bound is generalized to estimating submanifold integrals when h0 is a nonparametric density or a nonparametric instrumental variable function. The asymptotic normality of t statistics is established via sieve Riesz representation, and the corresponding inference is computed using Sobol points.
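The rate exponent s/(2s+d-m) in this abstract can be illustrated with a quick numerical sketch; the smoothness and dimension values below are made-up examples, not taken from the paper:

```python
# Illustrative sketch of the minimax rate exponent s / (2s + d - m) for
# estimating integrals of a Hölder-s regression with d ambient covariates
# on an m-dimensional submanifold. All numbers are hypothetical examples.

def rate_exponent(s: float, d: int, m: int) -> float:
    """Exponent r in the minimax rate n^{-r} = n^{-s/(2s + d - m)}."""
    assert 0 <= m < d
    return s / (2 * s + d - m)

s, d = 2.0, 5  # hypothetical smoothness and ambient dimension
exponents = {m: rate_exponent(s, d, m) for m in range(d)}

# A larger intrinsic dimension m gives a larger exponent, i.e. a faster
# rate: "thin sets are not equally thin". E.g. m = 0 gives 2/9, m = 4
# gives 2/5.
for m in range(1, d):
    assert exponents[m] > exponents[m - 1]
```

The monotonicity check reflects the abstract's point that the intrinsic dimensionality m of the thin set, not just the ambient dimension d, governs how fast its integrals can be estimated.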

Discussion Paper
Abstract

As AI systems shift from tools to collaborators, a central question is how the skills of humans relying on them change over time. We study this question mathematically by modeling the joint evolution of human skill and AI delegation as a coupled dynamical system. In our model, delegation adapts to relative performance, while skill improves through use and decays under non-use; crucially, both updates arise from optimizing a single performance metric measuring expected task error. Despite this local alignment, adaptive AI use fundamentally alters the global stability structure of human skill acquisition. Beyond the high-skill equilibrium of human-only learning, the system admits a stable low-skill equilibrium corresponding to persistent reliance, separated by a sharp basin boundary that makes early decisions effectively irreversible under the induced dynamics. We further show that AI assistance can strictly improve short-run performance while inducing persistent long-run performance loss relative to the no-AI baseline, driven by a negative feedback between delegation and practice. We characterize how AI quality deforms the basin boundary and show that these effects are robust to noise and asymmetric trust updates. Our results identify stability, not incentives or misalignment, as the central mechanism by which AI assistance can undermine long-run human performance and skill.
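As a purely illustrative sketch of the bistability this abstract describes, here is a minimal discrete-time version of coupled skill–delegation dynamics; all functional forms, parameter values, and the AI quality level q are assumptions of this sketch, not the paper's model:

```python
def step(h: float, d: float, q: float, lr: float = 0.2):
    """One update of human skill h and delegation d (both in [0, 1]).

    Assumed dynamics: delegation drifts toward the AI's performance
    advantage (q - h); skill improves through practice (1 - d) and
    decays under non-use (d).
    """
    d_new = min(max(d + lr * (q - h), 0.0), 1.0)
    h_new = min(max(h + lr * ((1 - d) * (1 - h) - d * h), 0.0), 1.0)
    return h_new, d_new

def simulate(h0: float, d0: float, q: float = 0.7, steps: int = 500):
    h, d = h0, d0
    for _ in range(steps):
        h, d = step(h, d, q)
    return h, d

# Two starting points on opposite sides of the basin boundary:
h_hi, d_hi = simulate(h0=0.9, d0=0.1)  # settles at high skill, no delegation
h_lo, d_lo = simulate(h0=0.2, d0=0.5)  # settles at low skill, full reliance
```

Under these assumed dynamics, a start above the boundary drives skill toward 1 and delegation toward 0, while a start below it locks delegation in at 1 and lets skill decay toward 0, mirroring the two stable equilibria and the effective irreversibility described in the abstract.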

Discussion Paper
Abstract

As AI systems enter institutional workflows, workers must decide whether to delegate task execution to AI and how much effort to invest in verifying AI outputs, while institutions evaluate workers using outcome-based standards that may misalign with workers’ private costs. We model delegation and verification as the solution to a rational worker’s optimization problem, and define worker quality by evaluating an institution-centered utility (distinct from the worker’s objective) at the resulting optimal action. We formally characterize optimal worker workflows and show that AI induces phase transitions, where arbitrarily small differences in verification ability lead to sharply different behaviors. As a result, AI can amplify workers with strong verification reliability while degrading institutional worker quality for others who rationally over-delegate and reduce oversight, even when baseline task success improves and no behavioral biases are present. These results identify a structural mechanism by which AI reshapes institutional worker quality and amplifies quality disparities between workers with different verification reliability.

Discussion Paper
Abstract

This paper studies nonparametric local (over-)identification and semiparametric efficiency in modern causal frameworks. We develop a unified approach that begins by translating structural models with latent variables into their induced statistical models of observables and then analyzes local overidentification through conditional moment restrictions. We apply this approach to three popular classes of causal models: (1) the general treatment model under unconfoundedness; (2) the negative control model; and (3) the long-term causal inference model under unobserved confounding. The first model yields a locally just-identified statistical model, implying that all regular asymptotically linear estimators of the treatment effect have the same asymptotic variance, which equals the (trivial) semiparametric efficient variance bound. In contrast, the latter two models involve nonparametric endogeneity and are naturally locally overidentified; consequently, some doubly robust orthogonal moment estimators of the average treatment effect are inefficient. Whereas existing work typically imposes strong conditions to restore local just-identification to justify the efficiency of their doubly robust orthogonal moment estimators, we characterize the semiparametric efficient variance bounds, along with efficient estimators, for the (locally) overidentified models (2) and (3). A small real data application, along with a simulation study, illustrates the semiparametric efficiency gains in model (3).

Discussion Paper
Abstract

We study how market segmentation affects consumers when a monopolist can adjust both prices and product qualities across segments, engaging in second- and third-degree price discrimination simultaneously. We characterize the consumer-optimal segmentation and show that it has a striking structure: consumers with the same value receive the same quality in every segment, though prices differ. Under mild conditions, any segmentation harms consumers if and only if demand is more elastic than a cost-determined threshold. Hence, potential benefits for consumers depend critically on cost and demand elasticities. These findings have implications for regulatory policy regarding price discrimination and market segmentation.

Discussion Paper
Abstract

Since the late 1980s, extreme poverty has declined sharply, life expectancy and schooling have increased, and electoral democracy has expanded. However, poverty reduction has slowed in recent years, particularly following the COVID-19 pandemic, amid intensifying conflict, fragility, climate risks, democratic backsliding, and the erosion of global trends—including trade integration and geopolitical stability—that once supported growth. These dynamics raise three interrelated questions: what barriers impede further progress; where will future growth in lower-income countries come from; and how can growth be broadly shared? Taking stock of 15 chapters forthcoming in Volume 6 of the Handbook of Development Economics, we discuss how external conditions, state capacity, and policy choices shape development; analyze the shifting growth drivers, including trade, technology, and the rise of services; discuss persistent inequality and distributional tensions; and conjecture that investing in institutions and people pays off.


History

In 1932, Alfred Cowles founded the Cowles Commission for Research in Economics in Colorado Springs. The Commission moved to Chicago in 1939, and finally to the Yale Department of Economics in 1954, where it was renamed the Cowles Foundation for Research in Economics.

Our Research Programs

Algorithms, Data, and Market Design

Econometrics

Economic Theory

Industrial Organization

Journal Publications