
Why the Jobs Market Isn’t Working and What AI Has Revealed

January 30, 2026 · Posted by: Jenna

Katrina being interviewed at Davos


At this year’s World Economic Forum in Davos, global leaders, including our Group CEO Katrina Leslie, gathered to discuss economic stability, productivity, and the future of work. One theme came through clearly: labour markets across many countries are under growing strain.

Application volumes continue to rise, yet outcomes for both jobseekers and employers are getting worse. People are applying for more roles, hearing back less, and struggling to secure work that fits their skills and circumstances. Employers, meanwhile, are processing higher volumes of applications but still reporting poor matching, lower retention, and persistent vacancies.

Against this backdrop, artificial intelligence is playing an increasingly influential role in recruitment. But rather than fixing the problem, its current use is often exposing deeper structural issues.

Speaking in Davos, Katrina argued that the hiring system itself is no longer fit for purpose and that AI is highlighting, not solving, its limitations.

A system under strain

For decades, hiring has followed the same basic model: employers post vacancies, jobseekers search and apply, and selection decisions are made further down the line. As this process moved online, speed and scale increased but the underlying structure remained unchanged.

The result is a system that relies on people applying repeatedly, often with little feedback, while employers face growing administrative burden and inconsistent results. AI has now been layered on top of this model, most commonly to manage volume by screening applicants out at the earliest stages.

According to recent data referenced in Davos discussions, around 80% of employers now use some form of AI in the initial stages of hiring. In many cases, this technology is used to filter or reject candidates before a human is involved.

The challenge, Katrina argued, is that many of these models have not been proven effective and are trained on public labour-market data already known to contain bias. When these systems are deployed at scale, they risk reinforcing existing inequalities and excluding capable candidates before their potential is ever seen.

Why oversight later isn’t enough

A common response to concerns about AI in hiring is the promise of human oversight further down the process. But this reassurance often misses the point.

Once a candidate has been screened out early by an automated system, that opportunity is effectively gone. Human review at a later stage cannot recover candidates who were never allowed through the first filter. In this way, early-stage automation can have a decisive and irreversible impact on who gains access to work.

This is one reason why regulators are paying closer attention. Under the EU AI Act, the use of AI to screen or rank individuals for employment is classified as a high-risk application. In the United States, class action lawsuits, including one involving Workday, have highlighted the scale at which automated decision-making can affect jobseekers.

The issue is not simply about technology, but about power, transparency, and accountability in systems that determine access to work.

Turning the model around

Katrina’s argument is that the problem does not start with AI; it starts with the hiring model itself. Rather than relying on people to search, apply, and compete at scale, she suggests turning the model upside down. Instead of people chasing jobs, all available jobs in the market should be matched to the individual.

In this approach, AI is not used to reject people automatically, but to help jobseekers navigate opportunity. Individuals can interact with the system directly, adjust what matters to them, and refine matches in real time. As they do so, the quality of matching improves and the need for mass screening falls away.

This shift has practical consequences. When people are shown roles that genuinely fit them, they apply for fewer jobs but with a higher chance of success. Employers, in turn, receive applications that are better aligned to the role, reducing friction on both sides.

A wider question for policymakers and employers

As AI becomes more embedded in recruitment, the choices being made now will shape who gains access to work in the years ahead. Used carefully, technology can widen access, reduce unnecessary barriers, and improve participation in labour markets. Used poorly, it can automate exclusion and reinforce the very problems it was meant to solve.

It is a question for employers, policymakers, and technologists alike: how to design hiring systems that work for people, not just processes. As labour markets continue to evolve, the focus will increasingly need to shift from managing volume to improving matching, transparency, and fairness.

