What happens when people begin using AI for work?
New research on employee skill and performance uncovers a paradox in how workers use AI.
What happens when an employee who does cognitive work, like writing or coding, begins to use AI? Should we expect more productivity and better-quality work?
Two new Cowles Foundation Discussion Papers from Yale's Nisheeth K. Vishnoi and collaborators examine these questions using economic models to understand the impact of individuals' choices to use AI at work.
To understand what happens to a person's performance when they begin to use AI, Vishnoi and Lingxiao Huang of Nanjing University examined the incentives that drive its adoption.
The answer hinges on something unexpected that varies widely across the workforce: a person's skill level before they begin using AI.
If a person's skills in a certain area start out at a low level, they are rationally incentivized to use AI. But skills improve with practice and erode without it. So the more a person relies on AI, the further their skills degrade, which makes AI use even more attractive, and the cycle repeats.
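The feedback loop is easy to see in a toy simulation. The sketch below is not Huang and Vishnoi's model; the skill scale, the learning and decay rates, and the delegate-whenever-the-AI-is-better rule are all illustrative assumptions.

```python
# Toy sketch of the practice/erosion loop (illustrative assumptions,
# not the paper's model). Skill lives in [0, 1]: practicing raises it,
# delegating to AI lets it decay, and each period the worker rationally
# delegates whenever the AI's quality exceeds their current skill.

def simulate(initial_skill, ai_quality, periods=50,
             learn_rate=0.05, decay_rate=0.05):
    skill = initial_skill
    for _ in range(periods):
        if ai_quality > skill:
            skill -= decay_rate * skill          # delegate: unused skill erodes
        else:
            skill += learn_rate * (1.0 - skill)  # work manually: skill grows
    return skill

for s0 in (0.5, 0.8):
    print(f"initial skill {s0:.1f} -> long-run skill {simulate(s0, ai_quality=0.7):.2f}")
# initial skill 0.5 -> long-run skill 0.04  (starts below the AI: dependence deepens)
# initial skill 0.8 -> long-run skill 0.98  (starts above the AI: practice compounds)
```

The same decision rule sends workers on either side of the AI's quality level to opposite long-run outcomes.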
A "high-skill" person stays capable and independent, while a "low-skill" person becomes persistently dependent on AI. As with any technology, advancements in AI evolve the behaviors of the users. Vishnoi’s work shows that as AI models improve, the gap between high and low skill workers is exacerbated.
Most importantly, the authors find that the skill degradation from AI use actually lowers people's overall performance compared with the no-AI scenario: improving AI capability can amplify short-term gains while inducing persistent long-run losses in both human skill and task performance, even under fully rational, performance-driven behavior.
But for which tasks do people rationally choose the short-term productivity boost of AI?
A second paper, which Vishnoi co-authored with Huang and Wenyang Xiao, models workers' decisions about which tasks to use AI for and how carefully to monitor its output. Workers in the model choose among three options: doing the work manually, using AI assistance and verifying its output, or delegating entirely to AI with no verification.
People pick among these options according to their own incentives, trading off the effort of completing the work against the quality of the output. The assumption is that their employer cares only about the work produced, not how the sausage is made.
The authors note that workers lean on AI more for hard or uncertain tasks, and less when tasks are easy or certain.
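A minimal sketch of that trade-off, using made-up quality and effort numbers rather than anything from the paper: each mode gets an assumed success probability and a personal effort cost, and the worker picks whichever mode maximizes quality minus effort.

```python
# Hypothetical three-way choice (illustrative numbers, not the paper's model):
# score each mode as assumed output quality minus the worker's personal effort.

def best_mode(difficulty, own_skill, ai_quality, verify_skill):
    # Assumed output quality of each mode on a task of given difficulty.
    quality = {
        "manual":       own_skill * (1.0 - difficulty),
        "ai_delegated": ai_quality * (1.0 - 0.5 * difficulty),
    }
    # Verification catches a share of AI failures, in proportion to the
    # worker's verification skill.
    quality["ai_verified"] = (quality["ai_delegated"]
                              + verify_skill * (1.0 - quality["ai_delegated"]))
    # Assumed effort the worker bears: manual effort grows with difficulty,
    # verifying costs a flat amount, delegating is free.
    effort = {"manual": 0.6 * difficulty, "ai_verified": 0.15, "ai_delegated": 0.0}
    return max(quality, key=lambda mode: quality[mode] - effort[mode])

print(best_mode(difficulty=0.1, own_skill=0.9, ai_quality=0.7, verify_skill=0.5))
# -> manual       (easy task: doing it yourself is cheap and reliable)
print(best_mode(difficulty=0.8, own_skill=0.9, ai_quality=0.7, verify_skill=0.5))
# -> ai_verified  (hard task: lean on the AI, but check its output)
```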
Their model finds that employers benefit from having employees who are good at evaluating AI outputs: the quality of their work improves, and they may even meet standards they couldn't reach before.
Workers who are less skilled at verification, however, may hand off more to AI than they should, inadvertently producing lower-quality work. The employer ends up worse off as a result.
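That failure mode falls directly out of the hypothetical sketch above: lowering the assumed verification skill flips the hard task from verified AI use to outright delegation.

```python
# Same made-up parameters, but a weak verifier: checking is no longer worth
# the effort, so the rational choice on a hard task is full delegation.
print(best_mode(difficulty=0.8, own_skill=0.9, ai_quality=0.7, verify_skill=0.1))
# -> ai_delegated
```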
More broadly, AI changes not just how well people work, but what it means to be a good worker, making the ability to oversee and evaluate AI outputs a central determinant of quality, alongside underlying task skill.
Taken together, the two papers show how AI can create systems where individually rational behavior leads to worse outcomes collectively.