Jean-Paul Carvalho is a professor of political economy in the Department of Economics at the University of Oxford and director of Oxford Elevate, the department’s executive education portfolio. In this episode of the Inside the Strategy Room podcast, he speaks with McKinsey Partner Robin Nuttall about what makes AI different from past waves of technology and shares the latest data on its implications for the nature of work, organizations, and business leadership.
The following transcript of their conversation has been edited for clarity and length. For more discussions on the strategy issues that matter, follow the series on your preferred podcast platform.
Robin Nuttall: How has your own interest in AI developed as the technology has advanced, and how did the topic become a part of your programs at Oxford?
Jean-Paul Carvalho: I’m naturally interested in AI because technology is the main engine of economic change. This began in the 1700s, with the Industrial Revolution bringing the ability to manufacture goods cheaply and at scale. That revolution led to cars, suburbs, advertising, consumerism, and much of what we associate with modern life today. With that, most of us have lived through an unusually stable environment.
That’s changing now, and it’s taken a while for people to come to grips with it. In April 2023, Pew [Research Center] published a survey showing that while 62 percent of respondents believed AI would majorly disrupt the workforce, only 28 percent believed their own job would be affected. So there has been this kind of trepidation: we were forced to pay attention when ChatGPT was released in November 2022, but some heads remained in the sand.
If we go back further, in 2012, the transition from symbolic AI to the deep neural nets we see today was starting to happen: Driverless cars were being trialed on the roads, AlexNet was out, and software engineers were starting to train AI agents to code. Online job vacancies for AI-related roles skyrocketed around 2015. By 2018, I was incorporating AI into my graduate course in political economy. Now, in our executive education program, Oxford Elevate, the leaders we interact with all want to know about AI—it’s front and center for them.
Robin Nuttall: Do you think the AI revolution is novel, or just another milestone in the long history of automation?
Jean-Paul Carvalho: I think there is a combination of novel factors that could mean the consequences of the AI revolution are far more wide-reaching and profound. First, AI automates cognitive tasks, not physical tasks. Cognitive tasks are what humans are good at; our cognitive skills set us apart from everything else on the planet—we’re unique in that we can build knowledge, generation after generation. With AI, that’s potentially not the case anymore.
Second, AI is a general-purpose technology, whereas the Industrial Revolution automated very specific tasks. Third, AI is globally scalable. Tech companies can be concentrated in very small areas, like Silicon Valley, and still service the whole world. Finally, AI will generate new knowledge and new capabilities that are both offensive and defensive, with national security implications of the magnitude of nuclear energy.
This combination of factors makes AI unique and its implications quite different from previous technological revolutions.
Robin Nuttall: We can think about the advancement of AI in waves: first, the emergence of machine learning and predictive technologies; then generative AI; and most recently, agentic AI. Across this development, how is AI impacting labor markets?
Jean-Paul Carvalho: It’s clear that AI systems can improve productivity at an individual level. Erik Brynjolfsson and coauthors looked at a staggered rollout of an AI chat assistant among customer support agents at a Fortune 500 firm. They found that customer support agents with access to this AI technology resolved about 15 percent more cases per hour. We see similar productivity improvements when it comes to coding and software development.
With AI agents, the technology becomes increasingly substitutable for human labor; there’s already evidence of full substitutability in certain tasks. Brian Jabarian and Luca Henkel conducted a field experiment with 70,000 job applicants who were randomly assigned to an AI voice recruiter or a human recruiter, with a third subset given a choice. The AI recruiter did better than the human recruiters in terms of job offers, job starts, and 30-day retention.
Again, while there’s a high level of substitutability, it’s at that individual or task level. At the firm or industry level, it’s more complicated.
There are several competing effects that will determine the overall aggregate effect on employment and wages. How substitutable is the technology for human labor? How much of a productivity boost will firms receive? If they’re more productive, they produce more and will hire more human labor, even if there’s some substitution—that’s the productivity effect. Then, how quickly will new tasks be generated that employ displaced human labor, where humans outcompete AI? It’s too early to talk about the latter, but current evidence suggests that the substitution and productivity effects roughly offset each other.
Daron Acemoglu and coauthors have looked at the aggregate effects of AI adoption in the US and found little change at the occupation or industry level in employment or wages. They do find a reduction in non-AI-related hiring, but that’s offset by other types of hiring. In Denmark, [Anders] Humlum and [Emilie] Vestergaard looked at changes in firms with widespread adoption of gen AI. They found very little change in hours worked and earnings but noted changes in occupational switching and organizational restructuring. That’s really what’s going on: a big shift within organizations.
You can see it in Gustavo de Souza’s work on the adoption of industrial software in Brazil—software that uses real-time sensor data to predict machine failures, optimize maintenance, and help workers use the machinery. These jobs were formerly done by white-collar workers in factory offices. After firms adopted the software, hiring of those workers fell, but hiring of manual workers actually rose because the machines could be operated more continuously and efficiently.
Robin Nuttall: We’ve seen scenarios where these dynamics have the consequence of compressing wages among white-collar workers, impacting spending and, therefore, growth. Do you think these scenarios are credible? And what are the implications for the talent pipeline?
Jean-Paul Carvalho: Of course, wages go down because of the composition effect, but the results are really mixed. Where we have clear results is in junior versus senior hiring. Again, Erik Brynjolfsson and coauthors examined US workers aged 22 to 25 and found a 16 percent reduction in employment in AI-exposed occupations. That’s a large drop and is especially prevalent in areas and occupations where AI substitutes for, rather than complements, human labor. It’s not actually about firing; it’s about a reduction in the hiring of junior workers.
This creates a big challenge in how we educate and train workers. How do you develop senior employees whose role is to oversee the decisions and work of AI agents when a limited number of new employees are being hired? You’re breaking the pipeline. Firms will have to hire and train people despite short-term incentives not to, if they are to get the right senior management to oversee the AI agents working within their corporations in the future. That’s not something managers and CEOs have had to think about to date, but this is a clear pipeline challenge they will now have to grapple with. What I do see is that humans are going to be in the loop for the foreseeable future.
Robin Nuttall: What other key disruptions or implications should CEOs and business leaders be thinking about?
Jean-Paul Carvalho: There will be massive disruption, I think. There are new issues of trust in organizations that haven’t arisen before. I was speaking to a tech founder whose new employees are being sent deepfake videos of him asking them to click on links for training, but these links are not training links, as you can imagine. Or you have a divisional manager with a lot of AI agents—where before they were managing a thousand employees, and now it’s a million. How do they cope with that? You don’t want to wake up and find that your AI agents have signed a million pages of contracts that you now have to unwind. You need very trusted systems to deal with the new technology.
Then there’s market disruption. Market power can disappear more quickly because AI produces cheaper substitute goods and services that outcompete you. We’re seeing this with enterprise software, where the market is really thinking through which firms will be outcompeted by substitutes generated by AI. Even where substitute goods and services aren’t generated by AI systems, [the technology] can still enable disruption within the value chain itself. For example, new entrants can scale much more quickly than in the past because they don’t face the same labor constraints. CEOs of established companies will need to be almost as nimble and innovative as start-ups—and willing to radically restructure.
There’s some evidence that early trials of gen AI in corporations have mostly failed. I don’t think it’s a matter of the technology; it’s a matter of strategy and execution. About 50 percent of AI budgets go to sales and marketing, but the true value creation comes from restructuring, changing workflows and processes, and slimming down the bureaucracy. This is where enormous value can be created, and hedge funds today are trying to pinpoint the companies and start-ups best placed to do so.
Robin Nuttall: Would you say the ability to scale AI as a large incumbent enterprise will be a new source of competitive advantage?
Jean-Paul Carvalho: I think that’s one area where there will be value creation. But it’s the trillion-dollar question: Where will the profit lie in the AI value chain? If we go back to the earlier dot-com boom and the internet revolution, where was the value created? People initially thought it was a pick-and-shovel play—you wanted to own the servers and routers. Sun Microsystems’ stock price, for example, went up 6,400 percent from the summer of 1994 to early 2000, and then it crashed by 90 to 95 percent. What emerged from the ashes of the dot-com crash were the companies that have shaped our world today: big platform players that build market power through network externalities and various lock-in effects.
It was thought that AI labs would enjoy such market power—that they would have a large moat because it required such large capex and various algorithmic “secret sauces” to train these models. However, that was somewhat blown up in late 2024 and early 2025 with the release of DeepSeek’s LLMs [large language models]: V3 in December 2024 and R1 in January 2025, which developed frontier capabilities on a shoestring budget. As we progress toward AGI [artificial general intelligence], it could be that massive capex and algorithmic secrets are again required to train these highly advanced models, so the moat could reappear.
It could be that the platforms really benefit from this AI revolution, as they did from the internet revolution. Existing digital ecosystems could be supercharged by AI, or new platforms could disrupt the existing digital ecosystems—we don’t know yet.
What is clear is that significant efficiency gains will be made by scaling AI within companies. Some companies may be better placed to do so than others; some may be more willing. It really depends on management and how they make this transition, but that is going to be a huge source of value creation.
Robin Nuttall: What do you see as the core elements of the “playbook” for organizations to make that transition and scale AI successfully?
Jean-Paul Carvalho: It’s going to depend on individual companies and industries. In healthcare, there’s a massive amount of back-office work, but it depends on the context—for example, the US is very different from the UK. The US healthcare system is extremely complicated, and new, very nimble healthcare providers are well placed to simplify the whole process. And that’s going to cut costs significantly. In education, a large percentage of increases in the wage bill is due to administration. Finding ways to eliminate the inefficiencies that crop up in large organizations will be key. What is the silver bullet? It’s a difficult question to answer in general. It depends on the specific industry.
Robin Nuttall: A choice business leaders face right now is automation versus augmentation. Do you think labor augmentation is a false hope, and that automation is just too alluring for organizations?
Jean-Paul Carvalho: The current path of technological development around AI is about substituting for human labor. If you look at the release of any AI model, the benchmark is beating 99 percent of humans at a particular task. It’s not about making humans 99 percent faster at the task, or making humans plus AI 20 percent better at the task. There was an alternative approach, developed by Norbert Wiener, that posed the question: How do we develop technology that best complements human productivity? I think this is a very important point. Not only do you get a much more humane type of organization, but you also solve some of the problems of training humans to be in the loop at a senior stage.
Moreover, at a much earlier stage, at the level of education, problems start to incubate. Think about the incentives for investment in skills. Fifteen years ago, if I were learning how to code, I would have had a 99 percent chance of beating a computer system. Today, I would have very little chance—so what is my incentive to invest in those skills? What happens if others stop investing in them? Those skills can then affect the accumulation of other skills across the economy that are required for various tasks.
And how do I learn from people? Where do I get my role models from? A lot of our knowledge is tacit and gained in person. How do I acquire skills of punctuality, diligence, and grit? At this earlier stage of human skill formation and productivity, the human complementary approach to AI—human-oriented AI—is going to be hugely important.
Robin Nuttall: As you look at the development of AI, are you fundamentally optimistic or pessimistic?
Jean-Paul Carvalho: I see different optimistic and pessimistic scenarios. We have to ask ourselves: Why are we doing this? As a society, why does this make sense? The optimistic scenario is that AI frees humans from mundane work and unlocks human creativity, allowing us to devote our time and energy to things we really value. That could be work, but it could also be many other things. That’s what John Maynard Keynes thought would happen, and it’s what a lot of people aspire to.
It is possible, but it depends on how political institutions adapt to the AI revolution. If political institutions remain inclusive and egalitarian norms remain in place, then the productivity boom that could be generated from AI could be channeled into a state in which individuals have that freedom. But if political institutions don’t adapt in an egalitarian way, then it could go in a different direction. The optimistic thing is that there is a positive path forward, and which way it goes is really up to us.