The McKinsey Podcast

The rise of the human–AI workforce


Most leaders herald the promise of AI, but many employees see it as a looming threat—a modern echo of Annie Oakley’s classic lines: “Anything you can do, I can do better. I can do anything better than you.” What does it take, then, to lead constructive partnerships between humans and AI agents at work? In this episode of The McKinsey Podcast, McKinsey Senior Partner Alexis Krivkovich and McKinsey Global Institute Partner Anu Madgavkar speak with Global Editorial Director Lucia Rahilly about new research on what AI can and can’t do, where humans will continue to add value, and what needs to happen to help all of us work side by side with agents and robots successfully.

The McKinsey Podcast is cohosted by Lucia Rahilly and Roberta Fusaro.

The following transcript has been edited for clarity and length.

Can AI really do what humans do?

Lucia Rahilly: Alexis, Anu, kudos on your new report Agents, robots, and us: Skill partnerships in the age of AI. One aspect that stood out for me was that you disrupted the conventional us/them binary—humans versus agents, humans versus robots—and construed us as being in a constructive partnership with these technologies. What is it about that partnership that immediately resonates with you—or makes you uneasy?


Alexis Krivkovich: What’s most exciting is that this technology hits every role and every aspect of how people spend their days, across every function in any industry you can name. Frontline workers, factory workers, white-collar workers, all the way up through the CEO suite—the scale of the change is already here. The real possibility, and the huge challenge, is how to absorb that much change this quickly.

Anu Madgavkar: We found that the vast majority of skills are shared—meaning skills that AI agents and robots can bring to the workplace but that human workers also use. This is a source of uneasiness because it means we must use AI to “super skill” ourselves. We can’t stay at that same base level on each skill and expect to add a lot of value. We need to enable everyone in the workforce to work with AI and get better at those skills. It’s a massive opportunity, but also a responsibility, to upskill people to use AI and be better at what they do.

Lucia Rahilly: The headlines can be apocalyptic on AI vis-à-vis the future of work and job loss. And in this MGI research, you say that more than half of work hours in the US could be automated across a broad range of professions. If this isn’t about job loss, what is it really about?

Anu Madgavkar: The research shows that, based on currently proven technologies and capabilities, more than half of current work hours could be automated. But we shouldn’t lose sight of the fact that human beings are vital. Almost half of that work is beyond the capabilities of today’s technology. A lot of this work is cognitive, social, emotional, and interpersonal—and some of it is physical.

More important, as we adopt and use technology in workflows, in business processes, and in things people do day to day, we’re finding that it creates new kinds of work for the people in the loop. That might involve new tasks to guide, prompt, validate, refine, or build on what AI is doing. It may also involve completely new kinds of demand: things that weren’t possible before but that we now have the technology to do, and at higher quality—much more R&D, for example, or cleaning a facility twice a day instead of once a week.

What happens to humans in an AI economy?

Lucia Rahilly: What do you think might define high-value human contributions in the next, say, five to ten years?

Anu Madgavkar: There’s a whole set of skills involving critical thinking—aspects of problem-solving, negotiation, conflict management, team management. In many aspects of those, humans will likely use AI but not be substituted by it. There are different examples across industries. In healthcare—for example, in trauma care, first aid, or complex surgery—robotic and agentic solutions may assist, but humans would continue to be vital. But administration, customer onboarding, or communications with customers—those could get much more automated.

Lucia Rahilly: I was interested to learn from the research that radiology has grown since the acceleration of AI in diagnostic work. That seems counterintuitive.

Anu Madgavkar: New things become possible when a scarce skill gets unlocked. In the case of radiology, the fact that you can use technology to scan images, process data, and bring out inferences so much faster helped unlock supply, make it more available, and meet an untapped need. Think of the smartphone revolution. We discovered so many new needs just because smartphones became available—the app ecosystem, thinking about social media and marketing in a very different way.

Alexis Krivkovich: In addition, there are skills that are going to matter more. Social–emotional skills, coordination skills, process management skills that existed previously will be at a premium, and we’ll want people to spend more of their time on those areas as AI takes other things off their plate. And there’s a new set of skills leaders in particular will need to meet this moment, like a voracious learning mindset.


Lucia Rahilly: Alexis, you lead our People & Organizational Performance Practice, and you’ve spoken so much about the trend toward skills-based hiring. I’m wondering how that approach fits here as roles begin to shift.


Alexis Krivkovich: Last week, I was with two dozen CHROs [chief human resource officers] from leading companies in North America, and many of them were discussing this conundrum: “I’m now facing a moment where from a hiring standpoint, every job description needs to be rewritten because some combination of the things I used to look for are no longer nearly as important, whereas others matter a lot more.” But practically, what they also said was “I don’t think I’m getting better candidates. I’m getting candidates who use the same tools I’m using to predict what my tools will look for, so they’re setting themselves up to be selected.”

I think some of us may revert to old-school, analog approaches: sit in an office and take a personality test or do a puzzle. We see folks asking software engineers to live code. The interview is for you to deliver a product, not talk about things you’ve delivered in the past.

Why has ROI proven elusive—and how can leaders deliver on it?

Lucia Rahilly: All these evolutions in the distribution of skills across this hybrid labor pool assume that companies capture the value of AI. The research puts that value at almost $3 trillion annually by 2030. But we also see that most companies are not yet seeing meaningful gains, at least at the enterprise level. What needs to change?

Alexis Krivkovich: The real opportunity sits with how you take what is, in most organizations, “pilot and point” experimentation focused on business value and expand it into big, at-scale “bet the company,” “change the business,” “points of EBITDA” kinds of opportunities. And I’d argue that for most organizations, those starting points are few enough that what you really need is the leadership team to align around a value thesis.

Where is the opportunity here? And then really marshal behind it in a way where you can attack it at scale. And what I mean by that is, I think in most organizations, the biggest opportunities cut across more than one leader’s domain. They might start in supply chain, but then immediately connect into the front end of customer service, delivery, ordering, processing, and back into areas of manufacturing.

Anu Madgavkar: Everybody needs to get more familiar with how to work with AI tools. But the question is whether the potential unlock will be transformational. So it’s probably the T-shaped approach, or maybe a series of T’s, where you need horizontal capability building, but you also need to place a few important bets to reimagine end-to-end processes and do things differently.

You also have to think about where the market will lead you and what parts of your profit pool are most sensitive to this disruption—whether it’s an internal possibility or a competitor threat, or frankly, just what the market is saying customers will want. If you’re in retail, it’s entirely conceivable that your customers want to do agentic commerce. If you’re a bank, it’s possible that your midsize to large-size clients might start wanting to use agents to interface with the finance function.

How can leaders make AI–human partnerships a success?

Lucia Rahilly: Again, this research anchors on partnership and the way most of us will work with AI. What will humans need to do to make that partnership successful in practice?

Alexis Krivkovich: It’s interesting to think about the future if this technology deployment is as widespread as the research predicts is possible. Everyone will need to understand how to deploy a new interaction model, but not everyone will have to understand all the ways AI works. I don’t know how my phone does what it does, but I know how to use it.

But particularly with tools like agents, when we’re asking them to do bodies of work, we will need to learn how to validate, provide the right judgment, redirect as Anu described, work iteratively, and test and learn. In a lot of organizations, roles haven’t had that iterative aspect. You focused on a specific set of well-defined tasks that you executed over time. You learned how to do that with precision, and that’s what was rewarded. Now there’s going to be a much higher expectation of experimentation, including real judgment about what worked and didn’t. That’s a very different day-to-day expectation.

Lucia Rahilly: What you’re describing is almost managerial. Suppose I’m leading a hybrid team where some of my colleagues are agents, learning continuously and working 24/7, sometimes outperforming humans who get sick or tired. How does my leadership of that team need to change?


Alexis Krivkovich: Those agents will fuel my ability to work continuously, at a different scale, across a broader array of things.

Anu Madgavkar: There’s a frame of reference managers have based on what work looks like—productivity levels, KPIs, optimizing workload, quality, and output. All these things will be questioned in fundamental ways. If you have an agentic solution that can generate 5,000 reports overnight, you’ll hit a new bottleneck because human capacity to review those reports won’t exist. Every time you’re up against a new bottleneck, there will be innovation and technology that then addresses that bottleneck. So we’ll be in a transition period. There’s going to be a lot of flux. Part of being a good manager will be having an appetite for that, being resilient through it, and being able to be invested and creative through that process.

What does success look like in the AI era?

Lucia Rahilly: What if we were to redesign education and training from scratch? What would we stop teaching, and what would we double down on?

Anu Madgavkar: It’s tempting to say, “AI can do so many things. Why should we learn how to do them?” But the analogy is, just because I have a calculator, should I not develop quantitative ability? Just because I have a GPT that can write something for me, should I spend no time trying to write myself?

There is some element of foundational cognitive or physical ability that comes just from doing an activity. We’re going to have to think carefully about what part we need to preserve versus what part we don’t.

We may also, over time, have less focus on specialized learning and more emphasis on more transferable, generalizable skills and capabilities, because the workforce will be in flux. If you spend five years going deep on one narrow area or one particular skill or certification, that area or skill may not be relevant by the time you’re in the workforce.

Alexis Krivkovich: Careful, Anu. You’re going to dissuade everyone from PhD programs.

One exciting thing about this reskilling moment is that AI enables us to do it better. The benefit for workforce learning is twofold. First, you can get AI tailored to you, so it ingests all the data points about how you perform, builds a point of view on where you have skill gaps or opportunities, and suggests how you can close them. Second, it can deliver that coaching while you’re doing the work: “I see you have 30 minutes free. Do you want to use this time to send out these five notes as follow-ups to customers? This is a common practice for people developing these sorts of relationships. I’ve taken a first pass based on your tone of voice and what best practice suggests will lead customers to respond.”

Lucia Rahilly: Many are calling this an existential moment for leaders. Our managing partner for North America, Eric Kutcher, recently described this as a CEO legacy moment, a make-or-break moment. Which risks worry you more—moving too fast with agents or moving too slowly?

Alexis Krivkovich: Moving too slowly. Speed is a strategy in and of itself. The biggest risk companies face is waiting for more clarity—because there’s so much ambiguity, so much unknown and unproven—before they make bets. They move with a belief that they don’t want to waste money, and that’s a dangerous position to be in. By the time you get enough clarity to know definitively where to go, you’ll be far behind.

Anu Madgavkar: Of course, people have concerns about speed. But whether it’s risk management, an understanding of ethics and compliance in the context of AI, regulation, or public education about how to use AI responsibly—on all these fronts, we’re going to have to move faster to tap into this opportunity.

Lucia Rahilly: Looking ahead, say, ten years from now, what would convince you that humans and AI have built a partnership that works across the board—for employees, for organizations, and for society?


Alexis Krivkovich: If AI augments and unleashes human potential, rather than replaces it. If we’re on the other side of the uncertainty and the fear curve. If we’re in a place where, just like a smartphone or a laptop or the internet, AI is a positive part of our daily life. And if the issues we grapple with are how to maintain the right controls, the right equity for folks to have appropriate access—and not whether AI should exist or not, and whether it’s a force for good.

Anu Madgavkar: For me, it’s about this new way of thinking about work as something we enjoy and find fulfilling, because work is so critical to the human project. If AI has the effect of enhancing the quality of work, the experience of work, then the other thing I would love to see is whether the benefits of AI have been democratized. Has everyone really felt them? Have we seen transformational effects and wider access and better quality? That would be real success.
