CFDP 1726R2

Incentives for Experimenting Agents

Author(s): 

Publication Date: September 2009

Revision Date: March 2013

Pages: 151

Abstract: 

We examine a repeated interaction between an agent, who undertakes experiments, and a principal, who provides the requisite funding for these experiments. The repeated interaction gives rise to a dynamic agency cost: the more lucrative the agent's stream of future rents following a failure, the more costly it is to provide current incentives, giving the principal an incentive to reduce the continuation value of the project. We characterize the set of recursive Markov equilibria. We also find that there are non-Markov equilibria that make the principal better off than the recursive Markov equilibrium, and that may make both the principal and the agent better off. Efficient equilibria front-load the agent's effort, inducing as much experimentation as possible over an initial period before switching to the worst possible continuation equilibrium. The initial phase concentrates the agent's effort near the beginning of the project, where it is most valuable, while the eventual switch to the worst continuation equilibrium attenuates the dynamic agency cost.
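To illustrate the dynamic agency cost described above, consider a stylized one-period incentive constraint; the notation here ($p$, $s$, $c$, $\delta$, $W$) is illustrative and not taken from the paper. Suppose effort succeeds with probability $p$ and earns the agent a bonus $s$, shirking lets the agent divert the funding $c$, and a failure leaves the agent a discounted continuation value $\delta W$. Effort is then incentive compatible only if

  p\,s + (1-p)\,\delta W \;\ge\; c + \delta W,
  \qquad\text{equivalently}\qquad
  s \;\ge\; \frac{c}{p} + \delta W,

so the bonus required to induce effort rises with the continuation value $W$ that the agent expects after a failure. This is the sense in which more lucrative future rents following failure make current incentives more costly for the principal.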

Keywords: 

Experimentation, Learning, Agency, Dynamic agency, Venture capital, Repeated principal-agent problem

JEL Classification Codes: D8, L2