PolicyLens

Green - Labour market

Require human review of workplace AI

Require assessments, worker consultation and human review for AI hiring, monitoring and discipline.

Last updated: May 2026.

Regulatory baseline

The central case moves beyond the UK white paper’s principles-led approach by creating workplace-specific duties and human review rights.

  • Public bodies must audit AI systems.
  • Employers face compliance duties.
  • Productivity delay is the main risk.

Core trade-offs

Workers gain protection from opaque automated decisions; employers lose some speed and automation savings. If drawn too broadly, the policy can slow adoption of useful AI.

  • Workers gain contestability.
  • Employers face compliance costs.
  • AI productivity may slow.

Illustrative fiscal impact

+GBP 0.2bn to +GBP 4.0bn. Central estimate: +GBP 1.0bn.

  • Positive numbers mean public-finance pressure; negative numbers mean Exchequer savings.
  • Gross costs are separated from tax, NI and benefit offsets.
  • Private business costs are not automatically fiscal costs.
  • Behavioural responses widen the range materially.
  • This is not an official costing.

Economic impact by 2027-28

  • Jobs: May protect some workers, but slower AI adoption can preserve inefficient tasks.
  • Wages: Protects against unfair wage and discipline decisions, not a general pay rise.
  • Prices: Compliance costs may pass through in AI-intensive services.
  • GDP / productivity: Likely negative if rules delay low-risk AI productivity gains.

Assessment

The policy is easier to justify for hiring, discipline and surveillance than for all workplace AI. A broad human-review rule could protect workers while slowing productivity-enhancing adoption.

Confidence: Low. Compliance cost and lost-productivity channels are not officially costed.

Main risks

  • Overbreadth: Covering low-risk AI could delay useful productivity tools.
  • Regulatory capacity: Existing regulators may lack technical resources.
  • Box-ticking: Human review may become formal rather than meaningful.

Safeguards

  • Limit hard duties to high-risk uses.
  • Fund technical regulator capacity.
  • Require audit trails, not blanket bans.

Academic evidence

Acemoglu and Restrepo, Journal of Political Economy, 2020

Robots and Jobs: Evidence from US Labor Markets

Automation can displace tasks and workers even when it raises output in some firms.

Supports caution on AI rules that trade protection against productivity.

UK government evidence

Department for Science, Innovation and Technology, 2023

A pro-innovation approach to AI regulation

The UK AI white paper relies on principles and existing regulators rather than a single AI regulator.

Defines the baseline for stronger workplace AI law.

House of Commons Library, 2023

Artificial intelligence and employment law

The Commons Library identifies employment-law issues around automated decision-making, transparency and contestability.

Supports worker-risk channels for AI protections.

PolicyLens estimates are illustrative and not official costings.