Generative AI is boosting productivity, but the distribution story looks ugly
The productivity case for generative AI keeps being sold as a clean win: faster work, better outputs, happier customers.
New research is starting to put numbers on the benefits, but it also points to a more politically dangerous outcome. Gains may come with wage compression, hollowed-out entry roles, and a widening gap between people who can steer the tools and people who are managed by them.
That is the accountability gap. "Efficiency" is not a neutral word when the upside is captured by owners and the downside is carried by workers, trainees, and consumers who get a thinner service. The most likely backlash will not be anti-technology. It will be anti-management: suspicion that AI is being used to cut costs while being branded as progress.
If organisations want AI adoption without a collapse in trust, they need to show where the time savings go. Into lower prices? Into better service? Into training and job redesign? Or simply into fewer people doing more work?
A credible adoption plan should include job redesign, minimum human review points, and a clear rule for when speed is allowed to override judgement.
We'd like your views:
- What metrics would prove AI is improving outcomes, not just squeezing labour?
- Should large employers publish an "AI impact statement" covering pay, workload, and role redesign?
- Which roles are most at risk of being hollowed out: junior analysts, customer service, or back office ops?
- How should policymakers respond if productivity rises while wages stagnate?
- What would a fair share-the-gains model look like in practice?