AI has yet to deliver the ROI many leaders expected. What are they getting wrong? “This is probably the biggest, most complex transformation we’ve seen—but it’s 80 percent business transformation and 20 percent tech transformation,” according to McKinsey’s North America Chair Eric Kutcher. “That’s different from how most people have thought about it.” On this episode of The McKinsey Podcast, Eric speaks with Global Editorial Director Lucia Rahilly about how CEOs can deliver on AI’s revolutionary potential—and meet this “legacy moment” successfully.
The McKinsey Podcast is cohosted by Lucia Rahilly and Roberta Fusaro.
The following transcript has been edited for clarity and length.
This is it—your legacy moment
Lucia Rahilly: Eric, today’s CEOs are confronting a welter of consequential changes. Let’s start with the big kahuna: AI. Talk to us about what the advent of agentic AI means for leaders.
Eric Kutcher: I tell everyone that this is the most exciting moment in my 28-year career. I call it the reimagine moment, the CEO legacy moment. As one CEO I really admire said to me, “This is the real Fourth Industrial Revolution.”
Watching your kids adopt this, watching your grandparents adopt this, watching everyone have an LLM [large language model] at their fingertips—it’s amazing. My wife doesn’t go to a travel agent anymore. She just asks, “What does Chat say?” Or my son, when he was a senior in the early innings of this technology, two years ago—he prepared for exams by taking every problem set he’d done and every test he’d been given, throwing it into Chat, and saying, “Make me a 90-minute exam, and then show me the answer key.”
Every one of our employees is using it—whether at home or at work, they’re using it. Unlike semiconductors, which took years, this happened overnight. It’s the most democratized technology we’ve ever seen. That’s also probably because of the scale of this technology today. This is probably the biggest, the most complex business transformation—but it’s 80 percent business transformation and 20 percent tech transformation. That’s different from how most people have thought about it.
CEOs who take this on will take their organizations to a different level. And CEOs who sit and wait—their companies aren’t going to exist. They’re not going to thrive. It’s that binary in terms of importance.
This is probably the biggest, the most complex business transformation—but it’s 80 percent business transformation and 20 percent tech transformation.
What about that ROI?
Lucia Rahilly: We’ve seen so much research on the challenge of translating AI investments into bottom-line results. Many companies are experimenting, but few are realizing meaningful ROI. How do you see leaders addressing the value creation challenge successfully?
Eric Kutcher: The reason we’ve seen these results is that we’ve been in the world of, “Let’s take this technology, deploy it, and watch good things happen.” And good things have happened. They just don’t add up. You can’t just rely on the technology—back to my 80/20. You have to take a step back and reimagine the process.
So if you’re a CEO, how are you approaching this problem? Leaders need to envision what they want this to look like in five years. That may be nothing more than high-level thinking, their own North Star. It may be how they articulate where they’re going organizationally to realize the magnitude of this change. And they’re all choosing very differently. Some say, “I’ve got to bring the whole organization along at once.” Others say, “If I can get one or two functions to do something magical, I can use those examples to challenge everyone else, and that’s sufficient.”
One leader told me, “I’ve got a few folks who are running ahead. That’s not OK. I need everyone to run at the same pace, and I’m not going to let the first three slow down.” Another said, “I want to triple the share price in four years.” He went to one of his functions and said, “I need you to go from 1,000 to 3,000 customers. And I don’t want it done the way we’ve done it in the past. Here’s my vision for how to get it done. Go make it happen.” That’s ambitious. That, to me, is the difference between those out in front and those who are waiting to see.
The other thing I hear about a lot from CEOs is fluency. The irony is that the youngest employees often know the technology better. I’m a tech guy. I was an engineer. But to some degree, I’m a Luddite relative to these kids. You’ve got to really want to learn—it’s such a big thing.
One CEO I love has a passion for golf, and he said, “I went to ChatGPT to figure out what shaft to buy. That was my eye-opening moment. That changed how I thought about everything I do at work.” So the mechanism for creating fluency—that intellectual curiosity and willingness to pause and say, “I want to do this differently”—that’s what they’re struggling with big time.
What’s the future for human workers?
Lucia Rahilly: As leaders start realizing more meaningful productivity gains from agentic AI, the next step will presumably be to reinvest some of those gains into growth. Do you see this leading to reductions in the human workforce?
Eric Kutcher: What will the future organization look like? The short answer is none of us knows. My two cents: The organization will feel a lot flatter. You’ll have human and agent hybrid workflows. You’ll have to figure out where to deploy agents: What’s the objective function? How do I evolve workflows where people and agents are responsible for output? You’ll need way more workers and judgment, and way fewer managers. So I think it’s a flatter organization.
I also think it’s an organization where people who are fast learners, who are willing to challenge, will succeed. And folks who just go through their daily routines—those jobs or tasks can easily be replaced.
One of the things I hear a lot is, “Gosh, the early stages of the career ladder will go away.” I don’t believe that, because as a CEO or leader, I’m not just thinking about this moment; I’m thinking about the future. I know I’ve got to apprentice people. I can’t just have everyone come in ready to play a senior-level, judgment-based role. I have to give them the opportunity to learn and experiment. And again, it’s more likely that these young folks, who are less experienced as employees, will have more fluency. I tell all early-career folks that when I started out, what I did over six weeks they now call Tuesday morning. And the problems now are more complicated. There’s a lot more to do than there was back then.
You’ll need way more workers and judgment, and way fewer managers. So I think it’s a flatter organization.
Alchemizing adoption
Lucia Rahilly: How should CEOs be thinking about balancing between new—if not AI-native, then at least AI-literate—talent versus existing teams that need to be upskilled to accelerate growth?
Eric Kutcher: I get this question all the time: “Do you have to bring everyone along, or do you just have to give them access?” And I think where my head has gotten to is that you’ve got to give them access—to the education, the tooling, et cetera. If they choose not to learn, you can be Darwinistic, because they’re not going to succeed in any environment going forward.
Personally, I’m trying to create more hands-on experiences. For example, I just got back from a two-day off-site where we talked about creating a Slack community so people can exchange ideas about their agentic AI experiences. Also, every training needs to be 30 percent how you’re thinking about AI and what employees do every day. The more you talk about it, the more you give people experience, the more they recognize they can’t ignore it.
Lucia Rahilly: Some say that embedding agentic AI in internally facing workflows first might be safer than, say, starting with the customer experience. Are you seeing that?
Eric Kutcher: I’ve gone back and forth on this one. Very few people get excited about internal change for the purpose of internal change. I think about it this way: If I’m a B2C company, can I change something around the customer? That’s what the organization gets its head around. The majority are there to serve that end customer, to drive toward a different outcome that leads to the company’s success. So I think you’ve got to do more to be more customer oriented because that’s what matters.
What about the risks?
Lucia Rahilly: Let’s talk about AI governance, particularly vis-à-vis agentic. How should leaders be thinking about responsible AI at scale?
Eric Kutcher: I’m not the expert on this, to be perfectly frank, so we should be careful how deep we tread. I think “responsible AI” is a very big and potentially loaded term. It is absolutely important. But people think about it on many different levels.
Some level of responsibility relates to thinking about AGI [artificial general intelligence]. What will it mean for humans, and for jobs, if all of us can be replaced? I don’t think we’re anywhere near that risk anytime soon, and I happen to believe in human ingenuity. But part of responsible AI is thinking about the broader good for humanity. That’s one definition.
Then there are obviously a bunch of other risks—one of which is that we’re going to create a level of technical debt as a result of this. If we don’t have the right governance, 15,000 agents today will go to 30,000 agents tomorrow. Do we have to retire these agents? As a leader, I don’t want people to call on the wrong ones, because the wrong ones have issues. So how do I manage the life cycle? This is a real issue that CIOs [chief information officers] will have to deal with in a real way.
So responsible AI has different levels. We’re going to need real governance, and depending on which part of the problem you’re trying to solve, you’ll bring in different parts of the organization.
Technology in a shifting world order
Lucia Rahilly: Let’s turn to geopolitics, given that the shifting global order remains a top priority for CEOs. How are you advising leaders to plan for growth in a world where geopolitical dynamics remain in flux—in some cases with tangible ramifications, such as supply chain reconfiguration?
Eric Kutcher: Most of what leaders think about is, “What’s the right answer for me and my organization over time?” And if you talk to most CEOs, they will say, “We have found a reasonable equilibrium.” I do think there are questions around whether the trust that has existed between historical allies is still reliable. But those questions sit more at a governmental level. I’m seeing people continue to make the investments they know will be the right investments.
Lucia Rahilly: There’s a way in which tech itself has become geopolitical, given increased focus on sovereign AI, data sovereignty. What should CEOs be thinking about when it comes to global strategy and competing in a world shaped by national tech priorities?
Eric Kutcher: One question I sometimes get relates to decoupling. Will we have one stack or multiple? How will this evolve? I believe we’ll see two stacks because of the way both are competing to win. We’re going to have very different solutions in that space, and I suspect they will exist in different markets because of geopolitics and some of the protections you’re describing. But there will still be collaboration, even as multiple stacks develop.
Then you get into the implications of sovereign AI or data restrictions. I think we’ll continue to live in that world, and that it will only grow. Some places want not just access to more technology that stays within their region, but more of their own local technology that lets them say, “We’re not reliant.” People are trying to avoid tricky geopolitics.
So there’s the question of where data and AI modeling sit, and how much that data can be used outside, which I think is incredibly manageable—maybe slightly more expensive. That’s different from, “I only want to use technology developed in my region.” That’s harder because it runs a real risk. That said, I don’t think it will play out, because the economic impact of not having the best available technology is likely to be much greater than the risk associated with using someone else’s. So then you’re stuck in a world where, sure, you have more complicated data center constructs, more complicated data residency constructs. But again, it’s all manageable. It’s a complicating factor, not a game changer.
How leaders are meeting the moment
Lucia Rahilly: You’ve talked a lot about leading through this interval of uncertainty. What other leadership mindsets and behaviors do you see among CEOs who navigate this kind of challenge particularly well?
Eric Kutcher: You can’t do these jobs unless you absolutely love what you do. You have to love the organization. You have to believe in what you’re doing. And I think the great ones today have gotten past “I’m going to say what I think people want me to say” and say what they believe, which helps. It’s easy to say what you think people want you to say, but then the winds switch and you’re stuck. If you believe in something, you’re not worried about which way the wind is blowing because you believe in it.
The great ones are also constantly learning. I love it when I see a CEO spend time with a junior team to really understand what they’re doing day-to-day. You want to get to know tech? Go spend time with the folks in your organization who are great at using it, ask them questions, and watch their eyes light up when they get to interact with you and teach you something. Those are the leaders who really understand what’s going on in their organization, because they take the time.
I also think great CEOs now are willing to be more vulnerable. In the past, you might have felt you had to project command and control, but now I think it’s OK to say, “That’s a great question. I don’t know the answer to it.” Or “That’s a good point. I’ve got to go think about that some more.” It’s liberating and inspiring for an organization to see their leader as human and approachable.
Lucia Rahilly: AI is presumably revving up the pace of executive decision-making, in addition to juicing productivity and enacting other transformational changes. How are CEOs approaching the need to make strategic moves at this new speed—at pace and at scale?
Eric Kutcher: I’ll probably get into trouble and receive some hate mail, both internally and from my CEO friends, but leaders actually spend less time on strategy than you think. Most of their time is spent on change management—moving the organization from A to B, versus deciding to go from A to B. The beauty is now I can execute on B a bit faster, but I still have to take my organization through that process. What may change is, “Is the ambition that I have big enough? Is it bold enough?” That’s what most good CEOs spend their time on.
I love it when I see a CEO spend time with a junior team to really understand what they’re doing day-to-day.
What’s on the horizon
Lucia Rahilly: What are you most excited about, and what are the leaders you work with most excited about, as we embark on the new year?
Eric Kutcher: I think this is the most interesting transformational moment we’ll live through in our professional careers. To be at the beginning of this, to shape the way the change takes place, is unbelievable.
I’m also optimistic about the rate of adoption because I think organizations are starting to realize, “I can’t just insert this technology. I have to change the way I operate.” And the beauty of this is that you can set bold, audacious goals. It is so fun to go after something that no one ever thought possible.
I sometimes get the question: “Is AI a bubble?” I have no idea if the valuation is a bubble. But the moment itself is not a bubble. The moment is real, and it’s happening, whether or not the valuations are right. The question is, “Do I see a world that looks different ten years from now as a world I get excited about?” Conversely, I don’t like it when people are overly excited. It makes me nervous, because that’s when something goes awry. But I think this next decade is going to be wild, and it’s going to be exciting.
Lucia Rahilly: If you had one piece of advice for junior folks starting their careers in this environment, what would it be?
Eric Kutcher: I’m not good at just one; I’ll give you three. First, never lose your intellectual curiosity. The thing I worry about is that it’s super easy to ask AI to do something for you. You’ve got to ask not five whys, like we used to always do, but 50 whys.
Second, never stop learning. Many folks starting out today already understand what this technology can do. How can you maintain that learning mindset? It’s another form of intellectual curiosity.
And third, never lose your own voice. I can tell when someone’s written something using AI. Find your voice, make it yours, and then use everything around you to help you communicate. If you haven’t figured out how to express yourself in your own way, you’re not thinking critically.