
Author’s Note

One of the abilities I am keen to explore with AI is how it can help me to write. This article was co-authored with Claude Opus 4.5; I will write about the experience once I have worked with it for longer. Interestingly, Claude was more optimistic on this topic than I was. It actually made me less worried.

Executive Summary

I am seriously concerned that AI adoption will cause unemployment that could spiral into broader economic problems for the UK. The central argument of this article is simple: what matters is not whether AI transforms the economy, but how quickly. Fast enough to outrun institutional adaptation, and we get disruption and suffering. Slow enough to allow adjustment, and we get managed transition.

I describe a number of constraints on fast adoption that suggest we have time—not unlimited, but enough to act deliberately. This is grounds for urgency, not complacency. For the UK, the path forward is not frontier AI development but excellence in AI application—professional services, financial services, creative industries. For accountants, it means using our policy influence and client relationships to shape a humane transition.

The question is not whether AI will transform the economy. It will. The question is whether we act while the window remains open. It will not stay open indefinitely.

Article

Something is happening in the graduate labour market. In 2024, 1.2 million UK graduates competed for just 17,000 entry-level positions—the highest application-to-vacancy ratio since records began in 1991 (Fortune, 2025). Youth unemployment for 16-24 year olds has risen from 10.9% in 2022 to 14.3% in 2025 (McKinsey, 2025). Graduate job advertisements in banking and finance are down 75% compared with 2019; software development postings have fallen 65%, and accounting roles by 54% (Financial Times, 2025).
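As a back-of-envelope check on the headline figures above, the following short Python sketch uses only the numbers cited in the text (1.2 million applicants, 17,000 vacancies) and nothing more:

```python
# Rough check on the cited graduate labour market figures.
applicants = 1_200_000   # UK graduates competing in 2024 (as cited)
vacancies = 17_000       # entry-level positions (as cited)

applications_per_vacancy = applicants / vacancies
print(f"Applications per vacancy: {applications_per_vacancy:.0f}")
```

That is roughly seventy applicants chasing every entry-level role, which gives some sense of why the ratio is the highest since records began.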

Economists debate the causes. Some point to AI automation; others emphasise post-pandemic hiring corrections, interest rate rises, employer National Insurance increases, and general economic uncertainty. The decline spans sectors with both high and low AI exposure, suggesting no single explanation suffices (Intergenerational Foundation, 2025). But one pattern is clear: employers are pausing, reassessing, and in many cases deciding they need fewer entry-level staff—whether because AI handles tasks directly, because anticipation of AI changes hiring calculus, or because economic conditions make caution rational.

The dominant framing—will AI create or destroy jobs?—obscures more important questions. What will AI actually do to our economy here in the UK? And critically: how quickly?

The Human Cost of Transition

Last weekend I was watching Tony Robbins (if you don’t know of Tony, he is a personal coach to the elite, including several US presidents) being interviewed on the Diary of a CEO podcast (Bartlett, 2026). Robbins talked about the impact that AI would have on jobs, pointing out how for people—particularly in the USA—identity is inextricably linked to work. “Jobs are not just money,” he argued. “Jobs are meaning.”

His concern was not that technology would change the economy—that is inevitable—but the pace of change: “If this was 100 years to do it, we could adjust. There will be more jobs, there’ll be new jobs, there’ll be new time. But it’s the timeframe that I’m most concerned about… it creates suffering. It’s something that we can predict is going to create suffering. And yet I see very few people in positions of influence and power doing much about it.” He is not the only one with such views. At Davos this week, Jamie Dimon, CEO of JPMorgan, warned that sudden displacement could cause “civil unrest” and said he would accept government intervention “to save society.” The opening address from Larry Fink, CEO of BlackRock, was blunter: “If AI does to white-collar workers what globalization did to blue-collar workers, we need to confront that today directly. It is not about the future. The future is now” (Yahoo Finance, 2026).

The historical parallels are instructive—though perhaps not in the way usually assumed. The Luddites of 1811-1812 were skilled textile workers who attacked machinery not from ignorance but from a clear-eyed understanding that new technology would destroy their livelihoods (National Archives, 2022). The Swing Riots of 1830—when agricultural workers destroyed threshing machines across southern England—represented the largest movement of social unrest in nineteenth-century England (Hobsbawm and Rudé, 2001). Nine rioters were hanged and 450 transported to Australia.

The Luddites lost. The technology was adopted anyway. In the long run, agricultural and industrial productivity gains made society vastly wealthier. But here is the crucial point: the transition was unnecessarily brutal because no adequate social protection existed. The technology would have been adopted regardless; the question was whether the transition would be managed humanely. It was not. We should learn from that failure, not repeat it.

The Challenge with Economic Estimates

Economists are notorious for getting their estimates wrong—and their AI projections span an improbably wide range. Goldman Sachs (2023) projects AI will boost global GDP by 7% over the next decade. Daron Acemoglu, the MIT economist who shared the 2024 Nobel Prize in Economics, estimates total factor productivity gains of just 0.5% over the same period—roughly fourteen times smaller (Acemoglu, 2024).

Acemoglu’s scepticism deserves attention. His concern is not that AI will cause catastrophic displacement, but that it may deliver neither the productivity bonanza nor the job creation that optimists promise. He distinguishes between AI that genuinely augments human capability—enabling new tasks and creating demand for new skills—and AI that merely automates existing tasks at lower cost, what he calls “so-so technology.” The latter displaces workers without generating the productivity gains needed to fund new employment elsewhere.

“I think that hype is making us invest badly in terms of the technology,” Acemoglu warns. “The faster you go, and the more hype you have, that course correction becomes less likely” (MIT Technology Review, 2025).

Acemoglu may be right that aggregate effects will be modest. But aggregate modesty can mask severe sectoral concentration. If AI displaces 5% of the workforce but that 5% is concentrated in professional services and graduate-entry roles, the experience for those affected is not modest at all. The UK’s service sector accounts for four-fifths of economic output (House of Commons Library, 2025). We are, in economic terms, sitting in AI’s blast zone—highly exposed to automation of knowledge work.
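The point about aggregate modesty masking sectoral severity can be made concrete with a sketch. Every number below is an illustrative assumption, not a forecast: a workforce of roughly 33 million, a 5% aggregate displacement, concentrated entirely in an exposed sector assumed to employ 6 million:

```python
# Illustration only: how a modest aggregate displacement figure can mask
# severe sectoral concentration. All inputs are assumptions, not forecasts.
workforce = 33_000_000       # assumed UK workforce
aggregate_rate = 0.05        # assumed 5% of all workers displaced
exposed_sector = 6_000_000   # assumed size of the exposed sector

displaced = workforce * aggregate_rate
sector_rate = displaced / exposed_sector  # if displacement is fully concentrated

print(f"Displaced workers: {displaced:,.0f}")
print(f"As a share of the exposed sector: {sector_rate:.0%}")
```

On these assumptions, a 5% aggregate figure becomes more than a quarter of the exposed sector's jobs. "Modest" at the national level can be devastating locally.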

We also have to look at where the gains go. A lot of the headline models are built around the US economy, which includes the AI companies themselves. “The ICT revolution saw the US as by far the biggest beneficiary in terms of productivity gains. The largest euro-zone economies, for example, saw little boost, while some other countries (including the UK, Canada and Australia) were in-between” (Capital Economics, 2024).

The UK’s Specific Position—And What To Do About It

I live in the UK, where our position in the AI economy is structurally weak in three ways.

First, we lack strategic autonomy in a critical technology. DeepMind, founded in London in 2010 and once Britain’s most promising AI company, was acquired by Google in 2014 for approximately $500 million (Shu, 2014). It is now Google DeepMind, headquartered in London but owned by Alphabet in California. When strategic decisions are made about AI development priorities, capability deployment, and safety trade-offs, they are made in Mountain View, not London.

Second, high-value AI development jobs concentrate elsewhere. DeepMind employs talented researchers in London, but the centre of gravity for frontier AI development is the San Francisco Bay Area. The highest-paying, highest-impact roles—and the wealth creation that accompanies them—cluster there. UK universities produce excellent AI researchers; many of them leave.

Third, we depend on foreign platforms for critical infrastructure. When a UK professional services firm pays for AI capabilities, it pays a US hyperscaler—Microsoft, Google, Amazon, or increasingly Anthropic and OpenAI. The productivity gain accrues locally, but the margin flows to Silicon Valley.

The UK government recognises these risks. The AI Action Plan explicitly frames the need for the UK to become an “AI Maker” rather than an “AI Taker” (DSIT, 2025). But aspiration is not achievement, and we must be realistic about what is possible.

The window for UK leadership in frontier AI development has largely closed. We lack the compute concentration, the energy infrastructure, and the capital aggregation to compete with US hyperscalers in building foundation models. Attempting to do so would be expensive and likely futile.

But frontier model development is not the only game. The UK can excel at applying AI to our service-sector strengths—professional services, financial services, creative industries, and the substantial expertise we retain in AI safety and governance. The value capture shifts from building the technology to deploying it intelligently. This is not a consolation prize; it is where most of the economic value will be created. The firms that thrive will not be those that build foundation models but those that integrate AI most effectively into complex, high-value workflows.

This strategic choice has implications. We must invest heavily in AI application skills—the people who can bridge domain expertise and AI capability. We must develop regulatory frameworks that attract AI deployment while protecting consumers. And we must accept that our economic future depends on technology we do not control, which creates genuine risks that policy should address.

The Two Scenarios—And Why Speed Matters

There are two broad scenarios for how this might play out:

The negative scenario: AI takes jobs faster than new employment emerges. Spending power collapses. Tax receipts fall. Benefits become unaffordable. Social cohesion fractures. As Acemoglu and Johnson (2023) observe, “wages are unlikely to rise when workers cannot push for their share of productivity growth.” The benefits of the Industrial Revolution only diffused broadly “after decades of social struggle and worker action.”

The positive scenario: AI increases productivity substantially. New job categories emerge—as they have in every previous technological transition. Forty years ago, the roles of data scientist, UX designer, social media manager, cloud architect, and SEO specialist did not exist; today they employ millions. The World Economic Forum’s Future of Jobs Report (2025) projects a net 78 million new roles globally by 2030. Taxes on AI-enhanced productivity fund retraining and income support during transition. We emerge wealthier, with work that is more meaningful and with less drudgery.

Both scenarios are plausible. The variable that determines which one we get is not whether AI transforms the economy—it will—but how quickly. Fast enough to outrun institutional adaptation, and we get disruption and suffering. Slow enough to allow workforce adjustment, retraining, and policy response, and we get managed transition.

Which of these is right? I have no idea. We need to watch, assess the balance, and react.

A Difficult But Manageable Transition

Several factors will slow AI adoption, buying time for adaptation—though not eliminating pain:

Legacy system integration. Most enterprises run on decades-old infrastructure designed before AI existed. The historical parallel is instructive: the dynamo existed in 1880, but factory productivity gains did not materialise until the 1920s because realising benefits required redesigning factories, not just swapping power sources (David, 1990). AI is similar—bolting it onto existing workflows captures a fraction of its potential. Full benefits require process redesign, which takes years.

Data quality. AI requires structured, clean, accessible data. Most organisations have fragmented, siloed data across non-communicating systems. The AI is ready; the data is not.

Infrastructure constraints. “High electricity prices, limited firm power capacity and slow grid connections are already deterring major investments in AI. Countries with cheap, reliable baseload power, particularly from hydro and nuclear, are emerging as global magnets for AI infrastructure” (Tony Blair Institute, 2025). France benefits from its nuclear fleet; Norway from hydroelectricity. The UK has neither advantage.

Liability gaps. Professional liability frameworks assume human judgment. AI creates unresolved questions about accountability, insurance, and regulatory compliance. In regulated sectors—including accounting—adoption will be cautious until these frameworks are established.

Skills shortages. The missing layer is people who can bridge domain expertise and AI capability—accountants who understand large language models, developers who understand accounting workflows. That population is small and in high demand.

These constraints suggest a transition measured in years to decades, not months. But—and this is crucial—the pain starts before full deployment. Survey evidence suggests employers are reducing graduate hiring in anticipation of AI-enabled productivity gains, even before AI handles the work directly (World Economic Forum, 2025; BSI, 2025). This anticipatory effect may explain why we see employment impacts already despite slow actual deployment—though disentangling this from broader economic caution is difficult.

This creates a painful gap: young people locked out of careers today, even as full automation remains years away. The window for policy response is still open—but it is narrowing.

What if I am wrong about speed? If AI capability advances faster than expected, and the tech industry's optimists are right about pace, then we have less time than I suggest; that makes immediate action more urgent, not less.

Where New Roles Will Emerge

If history is any guide, AI will create demand in categories we cannot fully anticipate. Optimism is easier to sustain when we have some idea of the direction of travel. If we look specifically at accounting, PwC’s 2026 AI Business Predictions describes an emerging “hourglass” workforce: talent concentrated at junior and senior levels, with a smaller mid-tier as AI handles routine “midlevel” work. Entry-level accountants should be learning to orchestrate AI, validate AI outputs, and handle exceptions—not competing with AI to do routine processing. Firms that invest in this redefined junior role will build the senior talent pipeline; those that simply stop hiring will face capability gaps in five to ten years (PwC, 2026).

This is consistent with the patterns already identified for new roles, including:

AI orchestration. Someone must configure, monitor, and refine AI systems. “Prompt engineering” barely existed two years ago; now it commands premium salaries. Every AI implementation requires human judgment about what to automate, how to handle exceptions, and when to override. This is skilled work, and demand is growing.

Authenticity and trust. As AI-generated content proliferates, provably human judgment becomes more valuable, not less. We already see “human-verified” certifications emerging in content creation. Clients will pay premiums for advice they know came from a human professional exercising genuine judgment—particularly in high-stakes decisions where accountability matters.

Complexity management. AI handles routine cases well; novel situations and edge cases still require human expertise. As routine work is automated, the remaining human work concentrates on genuinely difficult problems. Legal and accounting firms are already shifting toward advisory rather than compliance work—AI handles the filings, humans handle the strategy.

Care and human connection. Healthcare, education, social work, and counselling require human presence in ways that resist automation. Demand in these sectors is growing as populations age and mental health needs increase. Whether we fund them adequately is a political choice, not a technological constraint.

Physical-world services. Plumbers, electricians, carers, and tradespeople work in unstructured physical environments where AI-enabled robotics remains far from human capability. These roles may see rising relative wages as knowledge work faces downward pressure.

The challenge is not that new work will fail to emerge—it will—but that the transition may be deeply uneven. Young graduates trained for knowledge work may find themselves competing for shrinking positions, while demand grows in sectors they did not prepare for.

What Accountants Specifically Can Do

As accountants, we sit at the intersection of business, finance, and compliance. We have specific capabilities and specific obligations. Our professional bodies—ICAEW and ACCA—have genuine policy influence: ICAEW’s position on the Employment Rights Bill was reflected in the government dropping day-one unfair dismissal protection; its Business Confidence Monitor is cited in Parliament; its leadership accompanies ministers on trade delegations (ICAEW, 2025). This influence should be used.

At the policy level:

Advocate for transition funding mechanisms. AI-derived productivity gains are real, even if smaller than the hype suggests. Professional bodies should advocate for levies on AI-enhanced profits to fund retraining and income support—similar in principle to the apprenticeship levy, but targeted at AI transition. We understand tax policy; we should help design it.

Push for AI ethics in CPD requirements. Continuing professional development should include AI ethics, workforce transition, and responsible deployment—not just technical AI skills. Accountants advise on consequential decisions; we should understand the consequences.

Develop frameworks for responsible workforce transition. When firms reduce headcount citing AI-enabled efficiency, what does responsible transition look like? Professional guidance should exist—covering notice periods, retraining support, and reputational considerations.

It is also worth considering the unlikely event that the economy collapses outright: the expertise of accountants would be invaluable in helping local economies develop as interim platforms for commerce.

Conclusion: Speed Is the Variable

The central argument of this article is simple: what matters is not whether AI transforms the economy, but how quickly. Fast enough to outrun institutional adaptation, and we get disruption and suffering. Slow enough to allow adjustment, and we get managed transition.

The constraints I have described suggest we have time—not unlimited, but enough to act deliberately. This is grounds for urgency, not complacency. The Luddites remind us that technological transitions can cause immense suffering even when they ultimately make society wealthier. That suffering was not inevitable; it resulted from policy failure.

For the UK, the path forward is not frontier AI development but excellence in AI application—professional services, financial services, creative industries. For accountants, it means using our policy influence and client relationships to shape a humane transition.

The question is not whether AI will transform the economy. It will. The question is whether we act while the window remains open. It will not stay open indefinitely.


References

Acemoglu, D. (2024) ‘The Simple Macroeconomics of AI’, NBER Working Paper No. 32487. Cambridge, MA: National Bureau of Economic Research. Available at: https://www.nber.org/papers/w32487 (Accessed: 23 January 2026).

Acemoglu, D. and Johnson, S. (2023) Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. New York: PublicAffairs.

Bartlett, S. (2026) ‘Tony Robbins: No One Is Ready For What’s Coming! Why The Next Decade Will Break People!’, The Diary of a CEO [Podcast]. 15 January. Available at: https://open.spotify.com/episode/6KeM3GzZuUsklc66u2l2YN (Accessed: 23 January 2026).

BSI (2025) Evolving Together: AI and the Workforce. London: British Standards Institution. Available at: https://www.bsigroup.com/ (Accessed: 23 January 2026).

Capital Economics (2024) AI, Economies and Markets: How Artificial Intelligence Will Transform the Global Economy. London: Capital Economics.

David, P.A. (1990) ‘The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox’, American Economic Review, 80(2), pp. 355-361.

DSIT (2025) AI Opportunities Action Plan. London: Department for Science, Innovation and Technology. Available at: https://www.gov.uk/government/publications/ai-opportunities-action-plan (Accessed: 23 January 2026).

Financial Times (2025) ‘Is AI Killing Graduate Jobs?’, Financial Times, 24 July. Available at: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728 (Accessed: 23 January 2026).

Fortune (2025) ‘The Gen Z Job Crisis Is Real: 1.2 Million Recent Grads in the UK Competed for Just 17,000 Open Roles’, Fortune, 28 October. Available at: https://fortune.com/2025/10/28/gen-z-job-crisis-real-1-2-million-graduates-17000-jobs-uk-ai-labor-market-colleges/ (Accessed: 23 January 2026).

Goldman Sachs (2023) The Potentially Large Effects of Artificial Intelligence on Economic Growth. New York: Goldman Sachs Global Investment Research. Available at: https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html (Accessed: 23 January 2026).

Hobsbawm, E.J. and Rudé, G. (2001) Captain Swing. 2nd edn. London: Phoenix Press.

House of Commons Library (2025) ‘Components of GDP: Economic Indicators’. Available at: https://commonslibrary.parliament.uk/research-briefings/sn02787/ (Accessed: 25 January 2026).

ICAEW (2025) ‘Influencing Policy Throughout 2025’. Available at: https://www.icaew.com/insights/viewpoints-on-the-news/2025/dec-2025/influencing-policy-throughout-2025 (Accessed: 23 January 2026).

Intergenerational Foundation (2025) ‘Will AI Take Your Graduate Job?’. Available at: https://www.if.org.uk/2025/08/26/will-ai-take-your-graduate-job/ (Accessed: 23 January 2026).

McKinsey & Company (2025) ‘Not Yet Productive, Already Disruptive: AI’s Uneven Effects on UK Jobs and Talent’, McKinsey UK Insights, 14 July. Available at: https://www.mckinsey.com/uk/our-insights/the-mckinsey-uk-blog/ai-uneven-effects-on-uk-jobs-and-talent (Accessed: 23 January 2026).

MIT Technology Review (2025) ‘A Nobel Laureate on the Economics of Artificial Intelligence’, MIT Technology Review, 25 February. Available at: https://www.technologyreview.com/ (Accessed: 23 January 2026).

National Archives (2022) Why Did the Luddites Protest? Available at: https://www.nationalarchives.gov.uk/education/resources/why-did-the-luddites-protest/ (Accessed: 23 January 2026).

PwC (2026) 2026 AI Business Predictions. Available at: https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html (Accessed: 25 January 2026).

Shu, C. (2014) ‘Google Acquires Artificial Intelligence Startup DeepMind For More Than $500M’, TechCrunch, 26 January. Available at: https://techcrunch.com/2014/01/26/google-deepmind/ (Accessed: 23 January 2026).

Tony Blair Institute for Global Change (2025) Sovereignty, Security, Scale: A UK Strategy for AI Infrastructure. London: Tony Blair Institute. Available at: https://www.institute.global/ (Accessed: 23 January 2026).

World Economic Forum (2025) Future of Jobs Report 2025. Geneva: World Economic Forum. Available at: https://www.weforum.org/publications/the-future-of-jobs-report-2025/ (Accessed: 23 January 2026).

Yahoo Finance (2026) ‘At Davos, Fears About AI-Driven Job Loss Take Center Stage’, 23 January. Available at: https://finance.yahoo.com/news/at-davos-fears-about-ai-driven-job-loss-take-center-stage-124805401.html (Accessed: 25 January 2026).