Yale

Cowles Foundation for Research in Economics

Fostering the development and application of rigorous logical, mathematical, and statistical methods of analysis

Cowles Foundation Discussion Papers

New Cowles Foundation Discussion Papers

Discussion Paper
Abstract

We study the design of efficient dynamic recommendation systems, such as AI shopping assistants, in which a platform interacts with a user over multiple rounds to identify the most suitable product among those offered by advertisers. Advertisers have multi-dimensional private information: their private value from a purchase and private information about the user’s preferences. In each round, the platform displays recommendations; the user learns product characteristics of the shown items and then chooses whether to purchase, exit without purchasing, or submit a new query. These actions generate a stream of feedback—purchase, exit, and follow-up queries—that is informative about the user’s preferences and can be used both to refine future recommendations and to design contingent transfers. We introduce a class of data-driven dynamic team mechanisms that condition payments on realized user feedback. Our main result shows that data-driven dynamic team mechanisms achieve periodic ex-post implementation of the efficient allocation rule. We then develop variants that guarantee participation and deliver budget surplus, and provide conditions under which these properties can be jointly attained.

Discussion Paper
Abstract

As AI systems shift from directing users to content toward consuming it directly, publishers need a new revenue model: charging AI crawlers for content access. This model, called pay-per-crawl, must solve a problem of mechanism selection at scale: content is too heterogeneous for a fixed pricing framework. Different sub-types warrant not only different price levels but different pricing rules based on different unstructured features, and there are too many to enumerate or design by hand. We propose the LM Tree, an adaptive pricing agent that grows a segmentation tree over the content library, using LLMs to discover, from binary purchase feedback alone, what distinguishes high-value from low-value items and to apply those attributes at scale. We evaluate the LM Tree on real content from a major German technology publisher, using 8,939 articles and 80,451 buyer queries with willingness-to-pay calibrated from actual AI crawler traffic. The LM Tree achieves a 65% revenue gain over a single static price and a 47% gain over two-category pricing, outperforming even the publisher’s own 8-segment editorial taxonomy by 40%—recovering content distinctions the publisher’s own categories miss.
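As a rough illustration of the idea (not the paper's implementation), the sketch below grows a one-split segmentation tree over simulated content. The LLM step that discovers a distinguishing attribute is stubbed by a hand-supplied predicate (the `is_breaking_news` feature is hypothetical), and each leaf learns its own price from binary purchase feedback with a simple epsilon-greedy search over a price grid:

```python
import random

class Leaf:
    """One content segment with its own pricing problem."""
    def __init__(self, price_grid):
        self.price_grid = price_grid
        self.trials = {p: 0 for p in price_grid}  # times price p was posted
        self.sales = {p: 0 for p in price_grid}   # purchases observed at p

    def post_price(self, eps=0.1):
        # Explore with probability eps; otherwise post the price with the
        # highest estimated revenue (price x empirical sale rate).
        if random.random() < eps or not any(self.trials.values()):
            return random.choice(self.price_grid)
        return max(self.price_grid,
                   key=lambda p: p * self.sales[p] / max(self.trials[p], 1))

    def record(self, price, purchased):
        self.trials[price] += 1
        self.sales[price] += int(purchased)

class LMTree:
    """A minimal segmentation tree; real splits would come from an LLM."""
    def __init__(self, price_grid):
        self.price_grid = price_grid
        self.root = ("leaf", Leaf(price_grid))

    def _find(self, node, item):
        kind, payload = node
        if kind == "leaf":
            return payload
        attr, left, right = payload
        return self._find(right if attr(item) else left, item)

    def price_for(self, item):
        return self._find(self.root, item).post_price()

    def record(self, item, price, purchased):
        self._find(self.root, item).record(price, purchased)

    def split_root(self, attr):
        # Stand-in for the LLM step: grow the tree with a discovered binary
        # attribute, giving each side of the split its own pricing problem.
        self.root = ("node", (attr,
                              ("leaf", Leaf(self.price_grid)),
                              ("leaf", Leaf(self.price_grid))))

# Simulated crawler traffic: high willingness to pay for one content type.
random.seed(0)
tree = LMTree(price_grid=[0.01, 0.05, 0.25])
tree.split_root(lambda item: item["is_breaking_news"])  # hypothetical attribute
for _ in range(2000):
    item = {"is_breaking_news": random.random() < 0.5}
    wtp = 0.30 if item["is_breaking_news"] else 0.04  # synthetic WTP
    p = tree.price_for(item)
    tree.record(item, p, purchased=(wtp >= p))
```

After enough binary feedback, each leaf's revenue estimates separate the high-value segment (best price 0.25) from the low-value one (best price 0.01), which is the segmentation gain the abstract quantifies.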

Discussion Paper
Abstract

Bilateral bargaining under incomplete information provides a controlled testbed for evaluating large language model (LLM) agent capabilities. Bilateral trade demands individual rationality, strategic surplus maximization, and cooperation to realize gains from trade. We develop a structured bargaining environment in which LLMs negotiate via tool calls within an event-driven simulator, separating binding offers from natural-language messages to enable automated evaluation. The environment serves two purposes: as a benchmark for frontier models and as a training environment for open-weight models via reinforcement learning. In benchmark experiments, a round-robin tournament among five frontier models (15,000 negotiations) reveals that effective strategies implement price discrimination through sequential offers. Aggressive anchoring, calibrated concession, and temporal patience are associated with both the highest surplus share and the highest deal rate. Accommodating strategies that concede quickly disable price discrimination in the buyer role, yielding the lowest surplus capture and deal completion. Strategically competent models scale their behavior proportionally to item value, maintaining consistent performance across price tiers; weaker models perform well only when wide zones of possible agreement compensate for suboptimal strategies. In training experiments, we fine-tune Qwen3 (8B, 14B) via supervised fine-tuning (SFT) followed by Group Relative Policy Optimization (GRPO) against a fixed frontier opponent. The two stages optimize competing objectives: SFT approximately doubles surplus share but reduces deal rates, while RL recovers deal rates but erodes surplus gains—a tension traceable to the reward structure. SFT also compresses surplus variation across price tiers, and this compression generalizes to opponents unseen during training, suggesting that behavioral cloning instills proportional strategies rather than memorized price points.
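A hypothetical sketch of the kind of environment the abstract describes: binding offers and acceptances are structured actions (tool calls), while messages are free-form cheap talk, so deal rates and surplus splits can be scored automatically. Class and method names here are illustrative, not the paper's actual API:

```python
class Negotiation:
    def __init__(self, buyer_value, seller_cost):
        self.buyer_value = buyer_value  # buyer's private valuation
        self.seller_cost = seller_cost  # seller's private cost
        self.last_offer = {}            # role -> last standing binding offer
        self.transcript = []            # event log for automated evaluation
        self.deal_price = None

    def offer(self, who, price):
        # A binding price offer; supersedes this party's previous offer.
        self.last_offer[who] = price
        self.transcript.append(("offer", who, price))

    def message(self, who, text):
        # Natural-language message; logged but never binding.
        self.transcript.append(("message", who, text))

    def accept(self, who):
        # Accepting binds the deal at the counterparty's standing offer.
        other = "seller" if who == "buyer" else "buyer"
        self.deal_price = self.last_offer[other]
        self.transcript.append(("accept", who, self.deal_price))

    def surplus(self):
        # (buyer surplus, seller surplus); (0, 0) if no deal was reached.
        if self.deal_price is None:
            return 0, 0
        return (self.buyer_value - self.deal_price,
                self.deal_price - self.seller_cost)

# A short negotiation: the seller anchors high, concedes once, and the
# buyer accepts, splitting the available surplus of 60 evenly.
n = Negotiation(buyer_value=100, seller_cost=40)
n.offer("seller", 95)                           # aggressive anchor
n.message("buyer", "That is above my budget.")  # cheap talk, not binding
n.offer("buyer", 55)
n.offer("seller", 70)                           # calibrated concession
n.accept("buyer")                               # deal binds at 70
```

Separating binding offers from messages is what makes automated evaluation possible: surplus and deal rate are computed from the structured events alone, while the message channel remains available for the anchoring and concession tactics the tournament results describe.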

Discussion Paper
Abstract

A soft-floor auction asks bidders to accept an opening price to participate in a second-price auction. If no bidder accepts, lower bids are considered under first-price rules. Soft floors are common in practice despite being irrelevant under standard assumptions. When bidders regret losing, soft-floor auctions are more efficient and profitable than standard optimal auctions. Revenue increases because bidders are inclined to accept the opening price to compete in a regret-free second-price auction. Efficiency improves because a soft floor allows for a lower hard reserve, reducing the frequency of no sale. Both theory and an experiment confirm these motivations from practice.
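A minimal sketch of one generic reading of the soft-floor rule (not the paper's exact specification): bids that meet the opening price are treated as accepting it and compete under second-price rules; otherwise bids meeting the hard reserve are considered under first-price rules:

```python
def soft_floor_auction(bids, soft_floor, hard_reserve):
    """Return (winner_index, price), or None if no bid meets the hard reserve.

    Bids at or above the soft floor compete by second-price rules, with the
    payment floored at the soft floor itself; if no bid reaches the soft
    floor, the highest bid meeting the hard reserve wins and pays its own
    bid (first-price rules).
    """
    accepted = sorted(((b, i) for i, b in enumerate(bids) if b >= soft_floor),
                      reverse=True)
    if accepted:
        winner = accepted[0][1]
        # Second price, floored at the opening (soft-floor) price.
        price = max(accepted[1][0], soft_floor) if len(accepted) > 1 else soft_floor
        return winner, price
    eligible = [(b, i) for i, b in enumerate(bids) if b >= hard_reserve]
    if not eligible:
        return None  # no sale: no bid met even the hard reserve
    winner_bid, winner = max(eligible)
    return winner, winner_bid  # first price: the winner pays its own bid
```

The efficiency channel in the abstract is visible here: because low bids can still win under first-price rules, the hard reserve can be set lower without sacrificing revenue at the top, so fewer auctions end with no sale.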

Discussion Paper
Abstract

This paper proposes a semi-endogenous growth theory that incorporates technology vintages and the endogenous evolution of multiple technological paradigms through innovation. It provides a characterization of both balanced growth equilibrium and transitional dynamics in an environment where new technologies continuously emerge. From a positive perspective, the model rationalizes two distinct empirical patterns. Using two centuries of US patent data, I first document that the age profile of patents has a pronounced hump shape: most contemporary patents build upon technologies that are between 50 and 100 years old. Second, this age profile has remained stable throughout the past century. From a normative standpoint, the theory underscores a misallocation of research effort induced by the tendency among profit-maximizing firms to overinvest in further developing mature technologies. This yields a suboptimally slow development of emerging technologies. According to a calibrated version of the model, correcting such misallocation could generate welfare gains of 7%.

Discussion Paper
Abstract

Here we provide our solutions to the First Proof questions. We also discuss the best responses from publicly available AI systems that we were able to obtain in our experiments prior to the release of the problems on February 5, 2025. We hope this discussion will help readers with the relevant domain expertise to assess such responses.

Discussion Paper
Abstract

I estimate the effect of trade on local labor market concentration and its implications for wages using employer-employee linked data and tariff shocks from Brazil’s trade liberalization. Trade increased concentration by 7%, an effect driven by firm exit and worker flows to surviving import-competing firms. Increased concentration reduced wage take-home shares—estimated at 50 cents on the dollar pre-shock—enough to offset small wage gains from reallocation, but did not meaningfully reduce wages on net. Most of the wage declines attributed to Brazil’s trade liberalization resulted instead from reductions in the marginal revenue product of labor. Incorporating informality reveals substantial regional heterogeneity.

Discussion Paper
Abstract

We develop a new approach to estimating earnings, job, and employment dynamics using subjective expectations data from the NY Fed Survey of Consumer Expectations. These data provide beliefs about future earnings offers and acceptance probabilities, offering direct information on counterfactual outcomes and enabling identification under weaker assumptions. Our framework avoids biases from selection and unobserved heterogeneity that affect models using realized outcomes. First-step fixed-effects regressions identify risk, persistence, and transition effects; second-step GMM recovers the covariance structure of unobserved heterogeneities such as ability, mobility, and match quality. We find lower risk and persistence of the individual productivity component than in prior work, but greater heterogeneity in ability and match quality. Simulations show that reduced-form estimates overstate persistence and volatility of individual-level productivity due to job transitions and sorting. After accounting for heterogeneity, volatility declines and becomes flat across the earnings distribution. These results underscore the value of expectations data.
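The two-step logic can be illustrated on simulated data (all numbers below are synthetic, and the simple method-of-moments step is a stand-in for, not a reproduction of, the paper's GMM system). With earnings composed of a persistent individual component and a transitory shock, a first-step within transformation strips the fixed effect, and second moments then separate the two variance components:

```python
import numpy as np

# Synthetic panel: y_it = a_i + e_it, with unobserved "ability" a_i and a
# transitory shock e_it. True variances: var(a) = 1.0, var(e) = 0.25.
rng = np.random.default_rng(0)
N, T = 5000, 8
a = rng.normal(0.0, 1.0, size=(N, 1))  # persistent individual heterogeneity
e = rng.normal(0.0, 0.5, size=(N, T))  # transitory earnings shock
y = a + e

# Step 1 (fixed effects): the within transformation removes a_i entirely,
# so within variation reflects only the transitory component.
within = y - y.mean(axis=1, keepdims=True)

# Step 2 (moment matching): the mean within variance identifies var(e)
# after a small-T degrees-of-freedom correction; subtracting it from the
# pooled variance of y recovers var(a).
var_e = within.var(axis=1, ddof=0).mean() * T / (T - 1)
var_a = y.var(ddof=0) - var_e
```

Run on this simulated panel, `var_e` lands near 0.25 and `var_a` near 1.0. The abstract's point is that with realized outcomes alone this decomposition is biased by selection into jobs; beliefs about counterfactual offers supply the missing moments.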

History

In 1932, Alfred Cowles founded the Cowles Commission for Research in Economics in Colorado Springs. The Commission moved to Chicago in 1939, and finally to the Yale Department of Economics in 1954, where it was renamed the Cowles Foundation for Research in Economics.

Our Research Programs

Algorithms, Data, and Market Design

Econometrics

Economic Theory

Industrial Organization

Journal Publications