
Publications

Discussion Paper
Abstract

This paper examines the impact of early childcare on academic achievement for children in grade 5 and grade 9, based on a 2003 policy expansion that created quasi-random variation in slot availability for children aged 1–2. Starting childcare one year earlier increases math scores by 9.7% of a standard deviation (SD) in grade 9. Children whose mothers do not hold a high school diploma benefit by a statistically significant 28% of a SD in grade 9, narrowing the math achievement gap relative to children of higher-educated mothers by about one third. We also present evidence of strong improvements for children of immigrants.

Discussion Paper
Abstract

We study the design of efficient dynamic recommendation systems, such as AI shopping assistants, in which a platform interacts with a user over multiple rounds to identify the most suitable product among those offered by advertisers. Advertisers have multi-dimensional private information: their private value from a purchase and private information about the user’s preferences. In each round, the platform displays recommendations; the user learns product characteristics of the shown items and then chooses whether to purchase, exit without purchasing, or submit a new query. These actions generate a stream of feedback—purchase, exit, and follow-up queries—that is informative about the user’s preferences and can be used both to refine future recommendations and to design contingent transfers. We introduce a class of data-driven dynamic team mechanisms that condition payments on realized user feedback. Our main result shows that data-driven dynamic team mechanisms achieve periodic ex-post implementation of the efficient allocation rule. We then develop variants that guarantee participation and deliver budget surplus, and provide conditions under which these properties can be jointly attained.

Discussion Paper
Abstract

Bilateral bargaining under incomplete information provides a controlled testbed for evaluating large language model (LLM) agent capabilities. Bilateral trade demands individual rationality, strategic surplus maximization, and cooperation to realize gains from trade. We develop a structured bargaining environment in which LLMs negotiate via tool calls within an event-driven simulator, separating binding offers from natural-language messages to enable automated evaluation. The environment serves two purposes: as a benchmark for frontier models and as a training environment for open-weight models via reinforcement learning. In benchmark experiments, a round-robin tournament among five frontier models (15,000 negotiations) reveals that effective strategies implement price discrimination through sequential offers. Aggressive anchoring, calibrated concession, and temporal patience are associated with both the highest surplus share and the highest deal rate. Accommodating strategies that concede quickly disable price discrimination in the buyer role, yielding the lowest surplus capture and deal completion. Strategically competent models scale their behavior proportionally to item value, maintaining consistent performance across price tiers; weaker models perform well only when wide zones of possible agreement compensate for suboptimal strategies. In training experiments, we fine-tune Qwen3 (8B, 14B) via supervised fine-tuning (SFT) followed by Group Relative Policy Optimization (GRPO) against a fixed frontier opponent. The two stages optimize competing objectives: SFT approximately doubles surplus share but reduces deal rates, while RL recovers deal rates but erodes surplus gains—a tension traceable to the reward structure. SFT also compresses surplus variation across price tiers, and this compression generalizes to opponents unseen during training, suggesting that behavioral cloning instills proportional strategies rather than memorized price points.
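The tournament metrics described above (surplus share and deal rate) can be sketched as follows; the record layout and function name are illustrative assumptions, not the paper's evaluation code:

```python
def evaluate(negotiations):
    """Illustrative bilateral-trade metrics. Each record is a tuple
    (buyer_value, seller_cost, agreed_price), with agreed_price None
    when the negotiation ended without a deal."""
    deals = [(v, c, p) for v, c, p in negotiations if p is not None]
    # Deal rate: fraction of negotiations that ended in agreement.
    deal_rate = len(deals) / len(negotiations)
    # Seller's share of realized gains from trade, relative to the
    # zone of possible agreement (v - c), averaged over closed deals.
    shares = [(p - c) / (v - c) for v, c, p in deals if v > c]
    avg_seller_share = sum(shares) / len(shares) if shares else 0.0
    return deal_rate, avg_seller_share
```

Under this accounting, an "aggressive anchoring" seller pushes `avg_seller_share` toward 1 while risking a lower `deal_rate`, which is exactly the tension the SFT/RL training stages trade off.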

Discussion Paper
Abstract

As AI systems shift from directing users to content toward consuming it directly, publishers need a new revenue model: charging AI crawlers for content access. This model, called pay-per-crawl, must solve a problem of mechanism selection at scale: content is too heterogeneous for a fixed pricing framework. Different sub-types warrant not only different price levels but different pricing rules based on different unstructured features, and there are too many to enumerate or design by hand. We propose the LM Tree, an adaptive pricing agent that grows a segmentation tree over the content library, using LLMs to discover, from binary purchase feedback alone, what distinguishes high-value from low-value items and to apply those attributes at scale. We evaluate the LM Tree on real content from a major German technology publisher, using 8,939 articles and 80,451 buyer queries with willingness-to-pay calibrated from actual AI crawler traffic. The LM Tree achieves a 65% revenue gain over a single static price and a 47% gain over two-category pricing, outperforming even the publisher's own 8-segment editorial taxonomy by 40%—recovering content distinctions the publisher's own categories miss.
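The core loop can be sketched minimally as below. The routing predicates and the multiplicative price update are hypothetical stand-ins for the LM Tree's LLM-discovered attributes and its actual learning rule, which the abstract does not specify:

```python
class SegmentLeaf:
    """Illustrative leaf of a segmentation tree: posts one price per
    content segment and adapts it from binary purchase feedback."""
    def __init__(self, price, step=0.1):
        self.price = price
        self.step = step

    def observe(self, purchased):
        # Raise the price after a sale, lower it after a refusal.
        # A simple stand-in update, not the paper's rule.
        factor = (1 + self.step) if purchased else (1 - self.step)
        self.price *= factor

def route(item, branches, default_leaf):
    """Route an item to a leaf via attribute predicates. In the LM
    Tree these predicates are discovered by an LLM; here they are
    plain boolean functions for illustration."""
    for predicate, leaf in branches:
        if predicate(item):
            return leaf
    return default_leaf
```

A crawler query then becomes: `leaf = route(article, branches, fallback)`, quote `leaf.price`, and call `leaf.observe(bought)` with the binary outcome.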

Discussion Paper
Abstract

A soft-floor auction asks bidders to accept an opening price to participate in a second-price auction. If no bidder accepts, lower bids are considered under first-price rules. Soft floors are common in practice despite being irrelevant under standard assumptions. When bidders regret losing, soft-floor auctions are more efficient and profitable than standard optimal auctions. Revenue increases because bidders are inclined to accept the opening price to compete in a regret-free second-price auction. Efficiency improves because a soft floor allows for a lower hard reserve, reducing the frequency of no sale. Theory and experiment confirm these motivations observed in practice.
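The two-stage rule the abstract describes can be sketched as follows. The tie-breaking and the exact payment in the second-price stage are illustrative assumptions, not the paper's specification:

```python
def soft_floor_auction(bids, soft_floor, hard_reserve):
    """Illustrative soft-floor allocation rule.

    Bids at or above the soft floor compete in a second-price
    auction; if none qualify, bids at or above the (lower) hard
    reserve are considered under first-price rules.
    Returns (winning_bid, price_paid), or (None, None) for no sale.
    """
    accepters = sorted(b for b in bids if b >= soft_floor)
    if accepters:
        winner = accepters[-1]
        runner_up = accepters[-2] if len(accepters) > 1 else 0
        # Second-price stage: pay the larger of the runner-up bid
        # and the opening price.
        return winner, max(runner_up, soft_floor)
    eligible = [b for b in bids if b >= hard_reserve]
    if eligible:
        winner = max(eligible)
        return winner, winner  # first-price stage: pay own bid
    return None, None  # no sale
```

The efficiency claim is visible in the structure: because the first-price fallback still sells whenever some bid clears `hard_reserve`, the seller can set `hard_reserve` below the soft floor without increasing the frequency of no sale.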
