The McKinsey Podcast

Which tech trends are rising to the top of the business agenda?

Global competition has intensified, and the demand for energy and computing power is growing exponentially as companies embed AI, robotics, and other technologies into their everyday business processes. McKinsey’s annual outlook on technology trends points to 13 “frontier” trends that can help companies get a handle on these and other challenges. On this episode of The McKinsey Podcast, McKinsey Senior Partners Lareina Yee and Sven Smit and Partner Roger Roberts share the latest data and their insights on digital innovation, implementation, infrastructure, and trust with Global Editorial Director Lucia Rahilly.

The McKinsey Podcast is cohosted by Lucia Rahilly and Roberta Fusaro.

The following transcript has been edited for clarity and length.

What’s new on McKinsey.com

We’ve got a fresh article where seven McKinsey leaders share advice for CEOs facing unpredictable global trade rules. And speaking of CEOs—our new book A CEO of All Seasons drops October 7. It’s all about the four big stages of a CEO’s journey: stepping up, starting strong, staying ahead, and eventually passing the torch.

AI versus agentic AI

Lucia Rahilly: Lareina, let’s start with some basics. Could you differentiate agentic AI for us from the kinds of AI that have come before it? It’s been an evolution, and I’m not sure everyone understands the differences.

Lareina Yee: We’ve been on an AI journey for 50, 60, or more years. What many of us have already been using in business is analytical AI—that’s predictive. And over the past five or ten years, we’ve seen a complete bloom in machine learning.

Generative AI moves us from predictive to probabilistic models. It’s different: it’s based on a large language model, or LLM. And this year, people have spent a lot of time talking about agentic AI, which is another shift. That’s more about autonomous ability, the ability to complete a task or take action.

So it could be something as simple as changing your password or as complex as working with humans across multiple steps.

One thing I think is really important: The biggest unlocks for businesses often come from using multiple technologies. They’re using machine learning. They’re using predictive analytics. They’re using probabilistic models and that intuitive UI where you type a question into a generative AI tool. They’re also using agents—say, for call centers—to handle level one and level two questions.

So we’re seeing all of these being used.

Lucia Rahilly: Help us bring that piece of it to life. Can you give an example of how you see AI put into practice at work and how it’s having impact?

Lareina Yee: In both the headlines and deeper research, agentic AI is on everyone’s mind. For example, you can use agents to rethink a research process by having them take on different tasks collectively over time. Or take loan processing: Stages like analytics, approval, and risk scoring could be handled by agents.

In call centers, instead of asking for directions to do something simple—like changing a password or linking a credit card—an agent could complete the task directly. That really opens up the aperture of how you can apply this in business, which is why so many companies are standing up agents and asking, “What if 10 percent of my team were digital coworkers? How could I reimagine the work, or at least drive more efficiency in how it gets done?”
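
To make the hand-off idea concrete, here is a minimal sketch of the loan-processing example above, with each stage handled by a separate agent. It is illustrative only, not a description of any particular company’s system: the LoanApplication class, the call_llm placeholder, the risk score, and the approval threshold are all hypothetical.

```python
# Illustrative sketch: a minimal agent pipeline for loan processing.
# Each stage is a separate "agent"; call_llm is a hypothetical stand-in
# for whatever model or service a real system would use.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a model call; returns a canned response here."""
    return f"[model response to: {prompt[:40]}...]"


@dataclass
class LoanApplication:
    applicant: str
    amount: float
    notes: dict = field(default_factory=dict)


def analytics_agent(app: LoanApplication) -> LoanApplication:
    # Summarize the applicant's financial picture.
    app.notes["analysis"] = call_llm(f"Summarize financials for {app.applicant}")
    return app


def risk_agent(app: LoanApplication) -> LoanApplication:
    # Score the application; a real system would use a validated risk model.
    app.notes["risk_score"] = 0.42  # placeholder score
    return app


def approval_agent(app: LoanApplication) -> LoanApplication:
    # Apply a simple policy threshold and record the decision.
    app.notes["decision"] = (
        "approve" if app.notes["risk_score"] < 0.5 else "refer to human"
    )
    return app


def run_pipeline(app: LoanApplication) -> LoanApplication:
    # Agents hand the application off step by step, as in the example above.
    for agent in (analytics_agent, risk_agent, approval_agent):
        app = agent(app)
    return app


if __name__ == "__main__":
    result = run_pipeline(LoanApplication(applicant="A. Example", amount=25_000))
    print(result.notes)
```

In a real deployment, each agent would call a model or service, and a human would review anything the approval agent refers out.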

The idea of a digital coworker isn’t science fiction—it’s something you can apply now. It’s not just software deployment; it’s bringing something into the workforce so it looks more like human capital deployment. Let me give you an example. If you have an agent or a team of agents, you need to do all the technical work to get it ready to enter the workforce. Then you need to onboard it, train it, give it feedback, and figure out how to teach it ways of working and norms. We’re just starting to scratch the surface of this, but it’s already changing how we think about the workforce.

Where AI value is emerging

Lucia Rahilly: Are you seeing anything interesting that illustrates the value creation thesis for agentic AI?

Lareina Yee: Absolutely. Some simpler examples: One company shared that they used it in their sales process. They still have the same number of sellers, but they saw an 11 percent increase in lead generation and conversion. That’s concrete. Another company, on the retail side, shared how they’re using it to advise on shopping decisions and personalization: connecting what you may want to purchase with your purchasing history and what’s available, and even shipping it to you.

These are all early ideas, but they’re already helping with traditional business challenges and objectives.

Lucia Rahilly: How are leaders navigating adoption and ensuring that their employees have the skills necessary to benefit from AI at scale?

Lareina Yee: People are trying all kinds of things. One thing that’s incredibly humbling right now is that we’re all learning together.

One approach that seems to work is reimagining your business as AI-native. Give yourself the space to ask, “If I reimagine my business with these capabilities, how radically could it change? Where could I see value I couldn’t have imagined otherwise?” A separate decision is how much of that you implement.

I think the first thing is to look further out. The next step is to gain experience with the tools. Have I given my team access to the technology? Am I using it myself? Are we learning as we go, talking to peers, and sharing what’s working and what isn’t?

And then there are certain fundamentals of business that don’t go out of fashion. We need business cases. We need to prioritize areas that will make a difference, make sure they’re resourced, and apply the basics we know matter so we can see them through. It’s great to experiment, but at some point, you need to place a couple of bets and really invest.

Lucia Rahilly: What’s your take on how to get a data-informed, clear-eyed view of the risks and start managing those risks successfully?

Lareina Yee: We have to keep our eyes and ears open to the unintended consequences and the risks we introduce by using the technology in new, and sometimes very intimate, ways with people, because we’re working side by side with it. If you have a digital colleague entering a workspace, that’s very different from a back-end tool. Leaders need the courage to look at what could go wrong.

We can be quite pragmatic about this, not dystopian.

Lucia Rahilly: Any advice for leaders who are tasked with embarking on this transformation that is filled with so much uncertainty?

Lareina Yee: I think it’s a gift if you’re given that. As leaders, we have to ask more questions and listen more, rather than state and assert. We do not have great certainty, and that’s OK.

That becomes part of what we do—walking through the uncertainty and finding the value. If I don’t leave space for uncertainty and ask the questions, I might miss something. I don’t think it needs to be a bad thing; I think it can actually be a great thing.

We also have to remind ourselves that we decide how fast we go, because this is all about us as humans. If we don’t want technology to take 70 percent of the tasks, it doesn’t have to. The technology isn’t asking us to. It can be used however we want.

The next frontier in robotics

Lucia Rahilly: Why is robotics a top trend in the research this year?

Sven Smit: For many reasons. I personally play a little game called “spot the robot.” Many of us have seen cleaning robots, but few of us have been in a logistics center where robots climb to pick up items. Right now, some logistics centers are installing more robots than they are hiring people. So you see this tipping point.

We also see massive numbers of video clips of humanoid robots, which are coming and which will have their own version of full self-driving and self-walking.

One that has fascinated me personally in “spot the robot” is at Schiphol Airport, where I go often. There’s now a wheelchair that, once you’re past security, lets you sit down and enter your gate number; it automatically drives you straight to your gate and then returns on its own.

Lucia Rahilly: Wow. The research also showed adaptive robotics accelerating in sectors that I found somewhat unexpected, like energy and sustainability technologies. What does that look like in practice? And what does it mean in terms of the direction in which we’re heading?

Sven Smit: In energy, it’s about installing equipment, which will require humanoid robots. For example, almost every bit of manual labor that has some redundancy to it could, over time, be robotized one way or another.

The complicated work—like refurbishing an old house, rewiring electricity, or something similar in energy—will still take time. But there are standardized jobs in these sectors as well that you’d expect to be the early part of the automation process.

Lucia Rahilly: How much of robotics adoption is about replacing human capital, as you just described, and how much of it will be about augmenting or changing what we do?

Sven Smit: I would say that whether robots are going to replace the work or add to the work is almost a choice. What we know is that if robots are cheap enough, which some of the estimates suggest they will be, they will make certain things cheaper, and therefore demand will go up.

Classically, this has been framed as the stone mason problem. We could replace maybe half the stone masons with robotic stone masons and cut the workforce in half. Or you could say, “No, the construction of housing and offices will get so much cheaper that beside every current stone mason there will be a humanoid robot or another form of robotics to help them build bigger, better, and more houses for more people.”

Affordable housing is a huge problem. It looks like robots will be cheap enough to make certain jobs more affordable. So instead of replacing half the stone masons, every mason could have their own robot. In an ideal world, you get a twofer because every stone mason owns a robot, or because the robot works 24/7.

Business models will change by the day. People say full self-driving is a robot too. Does it replace all the Uber and taxi drivers? Maybe. But it could also bring a whole new set of jobs—running fleets of cars, cleaning them, positioning them, monitoring passenger behavior, doing logistics, package delivery, and so on. All of a sudden demand goes up. I think we often underestimate the demand equation when things get automated and become a lot cheaper.

Lucia Rahilly: And in the stone mason example, is this what we mean by “cobots”? What are cobots, and how should we expect those to begin to appear in the workplace?

Sven Smit: Yeah, with cobots, collaborative robots that work alongside people, the words and language will develop over time. The positive frame is that all automation, whether robots or AI, is a multiplication of human creativity, ingenuity, and production capacity. I think that largely could be the case, and it has always been the case.

If you think about farming, which went from 100 percent of the workforce to 2 percent, farmers became more productive based on the automation of the day—the tractor and a few other things. I see it like that.

Lucia Rahilly: What should leaders focus on now when it comes to robotics at a high level, acknowledging that the response is obviously sector-dependent?

Sven Smit: In general, with AI automation, and with robotics along with it, the fastest learner will win. The idea that you know exactly how it will work is, I think, arrogant. But if you don’t participate, you don’t know where it’s going, or when to scale: when to go deep versus when to stay shallow and learn.

As companies go step by step asking what can be done by AI, they need to do the same with robotics: process by process, what can be done, and when. When is the right time to experiment, and when is the right time to scale? In the end, if you’re not learning in this space, you’ll be late.

The energy transition accelerates

Lucia Rahilly: Now let’s turn to another of this year’s trends, the energy transition. Demand for energy is rising, with AI emerging as a big driver. The need to power data centers has been much in the news. Tell us, why is the energy transition a top tech trend this year?

Sven Smit: We used to frame it as replacing old energy with new energy—that was the transition. Now the frame is that we actually need more energy. That’s because of AI, but also because of rising global wealth.

So if we’re in a world of more energy, you almost need energy beyond oil and gas just to meet the demand. And then we want it to be reliable, which is why you see the discussion about nuclear rising, because it’s a clean form of additional energy.

The fundamentals dictate that it will be part of the mix. If we need twice as much energy, we already need one unit of non-oil and gas energy just to cover the addition. That will have to come from solar, wind, fusion, fission, and so on.

The debate around energy—and especially electricity—is intensifying. The shortage of electricity is fast approaching. Everyone is looking for the next source of at-scale, stable, reliable electricity, which will have to come from a mix of old and new technologies that can scale very quickly.

Lucia Rahilly: Which elements of the energy transition are the closest, in your view, to being poised for rapid commercial growth?

Sven Smit: I would say it’s all driven by cost. The fastest single growth category at the moment is solar, and wind comes in second. People are taking nuclear more seriously. Storage technologies and the battery discussion come next.

Lucia Rahilly: And the research points to scaling challenges, beyond the technology itself, including supply chain and infrastructure needs. Where do you see the most significant obstacles to scale?

Sven Smit: It’s a multitude of things. In China, mega installations are currently being built at a rate of almost one per week, and outside China we’re not at that rate almost anywhere. We need a much faster build rate.

So you see people restarting old plants again. That’s just on the power station side. In addition, because it’s electricity, you need to build more grids. You then have to have more storage or other ways to maintain reliability. It’s an investment program that requires massive build-outs, on the order of the traditional infrastructure build-outs we’ve had before.

Lucia Rahilly: How optimistic are you that we’ll reach a more sustainable, more resilient energy transition in a reasonable time horizon?

Sven Smit: People are learning that energy makes the world go around. Once humans decide to build, they’ll get it done. When the race was framed only as replacing oil and gas, which is in a way a positive frame for climate but a negative frame for the old world, you got resistance all over the place. But if you say, “Well, we need two times the amount of energy, and fast, plus AI and electricity tomorrow,” you go into a build mindset. My optimism is that we are starting to move to the idea that the world needs a major construction project in energy. Mindset matters a lot.

Navigating the need for speed and affordability

Lucia Rahilly: OK, what is the first priority leaders should focus on when it comes to the energy transition, again, at a high level?

Sven Smit: I think we have two priorities. One is the speed of the build, which runs into supply chain issues in the energy build-out and transition. But the second thing that’s very, very important is to make sure we don’t build it too expensively.

Because affordable energy is what will drive progress, whether it’s AI or just human prosperity and so on. You can prioritize stuff that’s very expensive, or you can prioritize stuff that is a little less expensive and has a chance to get cheaper.

I would hope we’re building stuff that is philosophically in the money in the midterm, even if it needs some subsidies in the short term to get done. Hopefully, though, we don’t build stuff that’s structurally out of the money versus low, affordable energy prices. So to me, balancing the build-out against affordability while you seek clean energy is very, very important. You need to solve variability and security on top.

Lucia Rahilly: How could the current geopolitical volatility affect speed to scale?

Sven Smit: Geopolitics is driving the energy debate for countries that want independent sources of energy rather than dependent ones. You see a massive push to build out independent sources while, at the same time, securing some of the existing ones. So geopolitics plays into the security of energy, but it also plays into the affordability of energy, because if you don’t have affordable energy, you don’t win the AI war, which prevents you from winning the security war.

Semiconductors power the AI era

Lucia Rahilly: Roger, welcome. Let’s talk about one of this year’s trends that may seem on its face a little less hot than, for example, agentic AI, but that undergirds much of what AI makes possible: application-specific semiconductors. Let’s establish that application-specific semiconductors are purpose-built chips optimized to perform specialized tasks, offering gains in speed, energy efficiency, and performance. I would love your best quick-and-dirty take on why these chips matter and why they’re growing in importance for business.

Roger Roberts: Graphics processing units, or GPUs, started out simply as graphics accelerators for PCs, the chips that helped you play real-time games. And it turned out they were just the right processing architecture to allow for AI computing at very high rates of speed.

That led to a world where AI computing, for both training of models and inference, has created incredible demand for semiconductors. Now we’re seeing ongoing innovation in this space that moves from thinking about GPUs as only a general-purpose way of delivering AI computing to more application-specific approaches.

So that can mean, “Hey, I’m going to have particular types of semiconductors that might accelerate my modeling of biological processes or the exploration of molecular performance inside a cell.” And what that kind of simulation can do unlocks lots of new possibilities for therapies. So when we start to think about application-specific chips and look ahead, we get excited about the possibilities that can come from accelerating these very particular uses or workloads.

Lucia Rahilly: How do you see developments like data center expansion affecting chip development, and specifically disruptive innovations that might reduce the power intensiveness that AI requires?

Roger Roberts: Well, there are several things to think about. One is that the chips themselves have gotten maybe 1,000 times better over the past several years at pushing more computing through a given piece of chip real estate. If I imagine my chips stacked up in very dense modules and racks, and those racks placed inside a data center, that means I’m getting more output per unit of square footage, capacity, or volume inside that data center.

That generates a ton of heat. So we start to get down to the physics of how to move that heat off the chip quickly, which means we’re starting to see people innovate in cooling technology: instead of moving the heat away with airflow, they’re moving to water or even more esoteric fluids that circulate around the chip and carry the heat away.

That can allow us to increase density. Then innovations at the model layer allow the model to use chip capacity more efficiently. We might see another 1,000 times improvement there in software efficiency, just by using the right data at the right time or by exercising parts of a model’s architecture, not all of it, to deliver a great result. All those things taken together are bending the curve on power demand.

Lucia Rahilly: What kind of consequences should we be aware of given this power demand?

Roger Roberts: The vast demand for processing more and more tokens, more and more generative outputs from AI usage, will fight against those efficiency gains and drive our power consumption up. We’re going to see significant growth in power demand and significant pressure on both electrical generation capacity and our ability to move that power through transmission to data centers.

Lucia Rahilly: How could these chips be used to improve performance or cost efficiency?

Roger Roberts: One example could be in robotics or, as people sometimes talk about it now, embodied AI agents. That can take many different forms, but ultimately the integration of multiple kinds of AI computing on a robot means that I, in some sense, need to bring all the human senses together.

I’ve got elements helping me with language understanding, perhaps, but also with vision and navigation and being able to control myself inside my physical environment in an effective way. What that then means is that you might have different application-specific AI chips inside that robot that allow for these various functions to come together and be integrated as a whole.

Lucia Rahilly: That’s a great example. Super cool. What’s the outlook vis-à-vis talent in the semiconductor industry? What does the supply and demand ratio look like as far as talent goes?

Roger Roberts: Semiconductors are constrained by many things: fabrication capacity, our ability to move them around the world through the supply chain, and the talented people needed not just to design them but also to transition them to at-scale manufacturing.

There are only a few companies in the world that can design the equipment or that have the experience in really scaling up manufacturing to the levels of quality and performance required now. A lot of the talent is concentrated in a few companies and a few countries. Talent becomes a very precious resource.

Lucia Rahilly: Yeah, particularly if more chip manufacturing moves to the United States, there will have to be some kind of capability-building funnel, presumably.

Roger Roberts: For sure. In the US, as we lost some of our chip manufacturing to overseas markets and providers, we’ve also had an erosion in that pragmatic, practical, hands-on talent. We have wonderful design capabilities, but the scaled manufacturing requires really trained people.

As manufacturers transition operations to the US and build more world-class fabs here, more upskilling of human capital will be needed to support the people working on the front lines and those designing and shaping the processes and practices inside those fabs.

Building digital trust

Lucia Rahilly: Now let’s turn to another of these trends, digital trust and cybersecurity. Give us the high level on this trend and what’s at stake as we advance into the AI economy. How does it affect, for example, customer choice or stakeholder support?

Roger Roberts: Fundamentally, we see trust as critical to accelerating the path to AI adoption and impact. It’s not something you do just because a regulator tells you to; you do it because you want to create the most impact and the most adoption, and to get your innovations out into the hands of the world.

Lucia Rahilly: Is the rise in geopolitical volatility significant here? Or do you view that as a relatively constant risk?

Roger Roberts: All players in the world interested in AI adoption are going to care about trust, and they’re going to have to. Trust among countries isn’t a prerequisite for companies to care about trust. Now, would it be better if we had more transnational collaboration and cooperation, so that companies could see a more consistent set of standards and expectations around the world? Sure, that would allow companies to operate with a common and simpler set of rules everywhere. But we haven’t seen that happen in other domains and have still made a lot of digital progress. So I am optimistic that economic forces will push us in the right direction.

Lucia Rahilly: Where are businesses in terms of adoption on the relevant technologies necessary to enable digital trust, and what kinds of challenges might they face if they’re not as far along as they should be in this area?

Roger Roberts: Well, there’s so much to do, because the landscape keeps evolving. Let’s use agentic AI and particularly agentic commerce as an example. If I’m running a website, digital entities that show up on my site and attempt to inspect my products and prices used to be referred to as malware bots.

Now they could be someone’s agent simply seeking to make a potential purchase. So we’re going to have to be able to differentiate between good traffic and bad traffic, or good bots and bad bots. That will be a challenge. It means those bots are going to have to bring authentication, forms of tokens that allow the site to let them in and let them behave like a human on the site. And the human has to trust that the agent will operate on that site within the intentions and guardrails they’ve set for its behavior.
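
As a rough illustration of what that authentication and those guardrails could look like, here is a minimal sketch in which a site verifies a signed token presented by a shopping agent and enforces a spending limit set by the human. The token format, the shared secret, and the function names are hypothetical; real agentic-commerce standards are still emerging.

```python
# Illustrative sketch: distinguishing an authenticated shopping agent from
# anonymous bot traffic with a signed token plus a human-set guardrail.
# The token scheme and names here are hypothetical, not an existing standard.
import hashlib
import hmac
import json

SHARED_SECRET = b"demo-secret"  # in practice, issued per agent by an identity provider


def issue_agent_token(agent_id: str, max_spend: float) -> dict:
    """The human (or their provider) issues a token encoding the agent's guardrails."""
    payload = json.dumps({"agent_id": agent_id, "max_spend": max_spend}, sort_keys=True)
    signature = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_agent_token(token: dict) -> dict | None:
    """The site checks the signature; unauthenticated traffic is treated as a plain bot."""
    expected = hmac.new(SHARED_SECRET, token["payload"].encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, token["signature"]):
        return json.loads(token["payload"])
    return None


def handle_purchase(token: dict, cart_total: float) -> str:
    # Block unauthenticated bots, enforce the human's spending guardrail,
    # and otherwise accept the agent's purchase attempt.
    claims = verify_agent_token(token)
    if claims is None:
        return "blocked: unauthenticated bot traffic"
    if cart_total > claims["max_spend"]:
        return "blocked: exceeds the guardrail the human set for this agent"
    return f"accepted: purchase by {claims['agent_id']}"


if __name__ == "__main__":
    token = issue_agent_token("shopping-agent-007", max_spend=200.0)
    print(handle_purchase(token, cart_total=150.0))  # accepted
    print(handle_purchase(token, cart_total=500.0))  # blocked by guardrail
```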

Lucia Rahilly: What does the talent picture look like in this area?

Roger Roberts: Talent in this area is going to be a challenge and a constraint. We’re going to have to train up lots of next-generation e-commerce talent to understand what it takes to create a good agent experience and not just a great CX [customer experience] on my site.

I’ve got to actually help commerce agents traverse and navigate the site. The good news is they can do that in human-like ways today, which doesn’t require an entire revamp of your architecture. On the other hand, that doesn’t mean what’s best for today’s model of human browsing will be best for agentic commerce in driving sales through a sales funnel and closing transactions.
