Generative AI is boosting productivity, but the distribution story looks ugly

Ended 12 March 2026

The productivity case for generative AI keeps being sold as a clean win: faster work, better outputs, happier customers.

New research is starting to put numbers on the benefits, but it also points to a more politically dangerous outcome. Gains may come with wage compression, hollowed-out entry roles, and a widening gap between people who can steer the tools and people who are managed by them.

That is the accountability gap. "Efficiency" is not a neutral word when the upside is captured by owners and the downside is carried by workers, trainees, and consumers who get a thinner service. The most likely backlash will not be anti-technology. It will be anti-management: suspicion that AI is being used to cut cost while calling it progress.

If organisations want AI adoption without a trust collapse, they need to show where the time savings go. Into lower prices? Into better service? Into training and job redesign? Or simply into fewer people doing more work?

A credible adoption plan should include job redesign, minimum human review points, and a clear rule for when speed is allowed to override judgement.

We'd like your views:

  • What metrics would prove AI is improving outcomes, not just squeezing labour?
  • Should large employers publish an "AI impact statement" covering pay, workload, and role redesign?
  • Which roles are most at risk of being hollowed out: junior analysts, customer service, or back office ops?
  • How should policymakers respond if productivity rises while wages stagnate?
  • What would a fair share-the-gains model look like in practice?

1 response from the Newspage community

AI productivity is real, but the distribution story is where it gets toxic. If AI saves 20% of a team's time and the only visible outcome is headcount cuts and flatter pay, people will treat it as a wage-suppression machine, not progress.

The metrics that matter are the ones management usually avoids: error rates, complaint rates, escalations to a human, rework, and time to decision with accountability attached. Then add labour metrics: junior hiring volumes, pay progression, training hours, and whether roles are being redesigned or just compressed.

In our AI audits, the biggest risk is silent deskilling. Tools make work faster, but they also strip out the apprenticeship layer where judgement is built. If juniors stop doing the first draft, who learns to spot the second-order risk? An AI impact statement is a sensible discipline if it forces leaders to say where the gains go: lower prices, better service, more training, or just more margin.