How can businesses deploy robotics to boost efficiency and tackle productivity and labor challenges? And how far away are we from a future where robots are picking strawberries and taking verbal instructions? McKinsey Partner Ani Kelkar joins McKinsey Talks Operations host Daphne Luchtenberg in a bonus episode to dissect the progress and potential of today’s robotics developments and applications—and how the future of robotics is carving a path for business innovation. An edited version of the conversation follows.
Daphne Luchtenberg: We recently recorded an episode of McKinsey Talks Operations in which we explored the future of robotics and automation with experts Ani Kelkar, Etienne Lacroix, Marc Theermann, and Ujjwal Kumar. The discussion highlighted the growing interest in robotics driven by AI advancements and the need to address productivity and labor challenges. Today we’re delighted to have Ani back with us for a bonus episode where we’ll explore a bit more about the potential for general-purpose robotics, the technical challenges, and some of the ethical considerations for their implementation. Welcome back, Ani.
Ani Kelkar: Excited to be here. What a delightful conversation with Etienne, Marc, and Ujjwal.
Daphne Luchtenberg: Absolutely. And as we have this bonus episode, I’d love to first ground ourselves. Let’s go back to the beginning. Can you explain what we mean by “general-purpose robotics” and “embodied AI”? What are the key capabilities they bring, and how do they differ from traditional robotics that we think of today?
Ani Kelkar: Absolutely. There is a lot of excitement about robotics, particularly about the potential of AI and robotics today. When we think about general-purpose robotics, it refers to machines that are designed to perform a wide range of tasks across various domains, unlike traditional robots, which were optimized for a single function. So think of robots that can adapt like a human worker at a factory and can handle multiple tasks—like picking, inspecting, and painting—rather than a single application, like an industrial arm doing one job repeatedly at high volume.
And when we think about embodied AI, it’s really the intelligence that enables this adaptability. It combines perception, decision-making, and physical interaction within the real world, including interactions with humans and collaborative actions between robots and humans. Oftentimes this is learned through trial and error or in simulation. Unlike traditional robots, which were very much rule based and required structured environments, general-purpose robots can handle that ambiguity, navigate complex spaces, and learn new things.
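To make that sense-decide-act pattern concrete, here is a minimal, hypothetical Python sketch of an embodied-AI control loop. Every class and function name is an illustrative placeholder rather than a real robotics API, and the random policy stands in for one learned through trial and error in simulation.

```python
import numpy as np


class LearnedPolicy:
    """Stand-in for a policy learned by trial and error in simulation."""

    def __init__(self, n_actions: int):
        self.n_actions = n_actions

    def decide(self, observation: np.ndarray) -> int:
        # A real policy would be a neural network mapping observations
        # to actions; a random choice keeps this sketch runnable.
        return int(np.random.randint(self.n_actions))


def sense() -> np.ndarray:
    """Placeholder for camera, depth, and touch readings."""
    return np.random.rand(16)  # a 16-dimensional observation vector


def act(action: int) -> None:
    """Placeholder for sending a motor command to the robot."""
    print(f"executing action {action}")


policy = LearnedPolicy(n_actions=4)
for _ in range(3):                       # a short episode
    observation = sense()                # 1. perceive the environment
    action = policy.decide(observation)  # 2. decide with the learned policy
    act(action)                          # 3. physically interact with the world
```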
Daphne Luchtenberg: Great. Let’s bring it even further into specific use cases—I’m thinking healthcare and agriculture. How do you see this innovation driving real value in those places, and can we already start to point to some real use cases?
Ani Kelkar: The excitement around general-purpose robotics is about the ability to span domains by taking critical building blocks of technology from one domain, such as automotive manufacturing, to another, such as healthcare or agriculture. And in those industries, we see general-purpose robotics as being poised to fill critical labor gaps. In healthcare, as an example, there are companies such as Diligent Robotics, whose Moxi robot has been helping hospitals with logistics, freeing up nurses to spend more time on patient care rather than moving the documents and objects required for operations. In the agriculture context, you have companies such as FarmWise Labs that are using AI-powered robots in weeding applications, helping reduce the use of pesticides and labor dependencies.
What sets these examples apart compared with traditional robots is their adaptability. These robots interact with humans, they perceive their environment, they decide, and then they act in dynamic, often messy environments. This is really the shift that’s making general-purpose robotics impactful for healthcare and agriculture.
Daphne Luchtenberg: Fascinating to see that already working in practice. So what would you say are the main technical challenges as you develop these innovations and start to put them to work, and how are they being addressed?
Ani Kelkar: There is a lot of work being done in academia and industry on several technical challenges: generalizing the embodied intelligence within robots; defining the right safety use cases and safety behaviors; making sure that the simulated environments in which these robots learn are good approximations of the real world, bridging that simulation-to-real gap; and, finally, making these robots energy efficient so that their run times are long enough for useful deployment.
So think about these general-purpose robots. They’re far from controlled environments like the traditional robots; they’re in unpredictable settings. They need cognitive intelligence to understand the environment, understand how the environment might respond to what the robot’s doing, and understand how the robot needs to interact with its environment, whether with people or other machines. And so there are lots of techniques—whether using large-scale foundation models similar to what we saw with ChatGPT or techniques to bridge the simulation-to-real transfer—being advanced by companies like OpenAI, Google DeepMind, and NVIDIA, which are really pushing the envelope when it comes to generalization. Moreover, there is significant focus in academia, particularly around developing the safety use cases for these robots.
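One common approach to that simulation-to-real transfer is domain randomization: varying a simulator’s physical parameters during training so the policy cannot overfit to any single simulated world. The sketch below illustrates the idea under stated assumptions; the Simulator class, its parameters, and the parameter ranges are all hypothetical, not any vendor’s API.

```python
import random


class Simulator:
    """Toy stand-in for a physics simulator with tunable parameters."""

    def __init__(self, friction: float, object_mass: float, light_level: float):
        self.friction = friction
        self.object_mass = object_mass
        self.light_level = light_level

    def run_episode(self) -> float:
        # A real call would roll out the robot's policy and return its
        # reward; a dummy score keeps the sketch self-contained.
        return random.random()


def randomized_simulator() -> Simulator:
    # Sample each parameter from a range wide enough to cover plausible
    # real-world conditions, so the policy cannot overfit to one world.
    return Simulator(
        friction=random.uniform(0.2, 1.0),
        object_mass=random.uniform(0.05, 0.5),  # kilograms (assumed range)
        light_level=random.uniform(0.3, 1.0),   # normalized brightness (assumed)
    )


for episode in range(5):
    sim = randomized_simulator()  # new physics every training episode
    reward = sim.run_episode()
    print(f"episode {episode}: friction={sim.friction:.2f}, reward={reward:.2f}")
```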
Daphne Luchtenberg: When it comes to safety, there are ethics that should be considered when you’re integrating these general-purpose robotics and embodied AI into everyday life. How can we ensure that they’re deployed responsibly and that there are guardrails around both their development and how they’re used in these real-life settings?
Ani Kelkar: That’s a great question and an important concern for all of the leading players and researchers in this space. Beyond safety, there are three other core concerns: bias, privacy, and displacement. For example, if a general-purpose robot learns from real-world data, it may inadvertently absorb certain societal biases or act unpredictably. That’s why you have institutions like the Partnership on AI and the IEEE calling for robust auditability, human oversight with humans in the loop, and open data sets. Responsible development of robots means designing with transparency, ensuring the AI systems can explain their decisions, and creating regulatory frameworks that balance innovation and accountability, particularly as these systems start entering healthcare, education, and even public spaces.
Daphne Luchtenberg: The thing that keeps coming to my mind is the question around energy and battery storage: How long can a robot run on the energy storage available today? And as we start asking these robots to be so much more intelligent and to run gen AI, which we know has high energy consumption, how is this arena thinking about that?
Ani Kelkar: We have multiple companies trying different approaches. The typical general-purpose-robot deployment is so energy intensive that run times are anywhere between 30 minutes and two hours. Now, that’s not really suitable for a factory worker who works eight-hour shifts. So you have companies like Mentee Robotics that have experimented with swappable batteries. You have other companies thinking about tethered robots for specific tasks—so they’re tethered to a power cable rather than powered by their batteries. There are many companies innovating on this front, which gives me optimism that we’re not too far out.
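As a back-of-the-envelope illustration of that gap, the short Python sketch below estimates how many battery swaps an eight-hour shift would require at the run times quoted above. The five-minute swap time is an assumption added for illustration, not a figure from the conversation.

```python
# Rough arithmetic on the run-time gap described above: how many
# battery swaps does an eight-hour shift require at today's reported
# run times? SWAP_MINUTES is an assumed figure for illustration only.

import math

SHIFT_HOURS = 8.0
SWAP_MINUTES = 5.0  # assumed time per hot swap

for run_time_hours in (0.5, 2.0):  # the 30-minute and two-hour cases
    swaps = math.ceil(SHIFT_HOURS / run_time_hours) - 1  # first battery starts charged
    overhead = swaps * SWAP_MINUTES
    print(f"{run_time_hours}h run time: {swaps} swaps, ~{overhead:.0f} min of swap overhead per shift")
```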
To bring some more of the technical challenges to life, I was speaking with a leading researcher in the robotics space about a problem called forward-dynamics prediction. Think about a robot that’s being deployed in an agricultural context to pick strawberries. Not only does the robot need to identify what the strawberry is, it needs to determine where to grasp the fruit and where to cut. Moreover, the robot needs to understand how its attempt to grasp the fruit is potentially going to displace it and then be able to respond to that displacement, which is something humans do intuitively: you simply learn that when you reach for objects, they sometimes move. But the robot needs training data and techniques to learn that, which continues to be a challenge because a lot of that data doesn’t exist.
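In code terms, forward-dynamics prediction is a learned function f(state, action) → next state. The hedged sketch below uses a deliberately simple linear model as a stand-in for what would, in practice, be a neural network trained on manipulation data; the dimensions, weights, and the assumption that the fruit moves about half as far as the gripper are all illustrative.

```python
import numpy as np

STATE_DIM = 3   # e.g., the fruit's (x, y, z) position in the robot frame
ACTION_DIM = 3  # e.g., the gripper's commanded displacement


class ForwardDynamicsModel:
    """Predicts the next state from the current state and a planned action."""

    def __init__(self):
        # Weights a real system would learn from manipulation data; the
        # 0.5 factor encodes the illustrative assumption that the fruit
        # moves about half as far as the gripper does.
        self.W_state = np.eye(STATE_DIM)
        self.W_action = 0.5 * np.eye(ACTION_DIM)

    def predict(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        return self.W_state @ state + self.W_action @ action


model = ForwardDynamicsModel()
fruit_position = np.array([0.10, 0.00, 0.30])  # meters (illustrative)
grasp_motion = np.array([0.02, 0.00, -0.01])   # planned gripper move

predicted = model.predict(fruit_position, grasp_motion)
displacement = predicted - fruit_position
print(f"expected fruit displacement: {displacement}")  # replan the grasp if too large
```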
Another way to think about the technical challenges is how we interact with our environments. We often interact with our environment through gestures or voice. So companies, including Figure and OpenAI, have started to work toward foundation models that combine vision, language, and action. That way, you can communicate instructions to general-purpose robots, which can then interpret those instructions and translate them into actions in the real world. If you ask a robot to pick up an apple, it can look around its environment: it knows what an apple is, it knows where to find it in its environment, and it can then act on it. All of these things sit at the bleeding edge of robotics research today, and we’ll soon find them in commercial-scale deployments.
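The apple example maps naturally onto a three-stage pipeline: perceive the scene, ground the instruction against what was detected, and plan primitive actions. The sketch below walks through that pipeline with toy stand-ins; every function, class, and detection here is a hypothetical placeholder, and production vision-language-action systems replace each stage with large multimodal models.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    position: tuple[float, float, float]  # (x, y, z) in the robot frame


def perceive_scene() -> list[Detection]:
    """Placeholder for an object detector running on camera input."""
    return [
        Detection("apple", (0.4, 0.1, 0.2)),
        Detection("mug", (0.6, -0.2, 0.2)),
    ]


def ground_instruction(instruction: str, scene: list[Detection]) -> Detection | None:
    """Match the instruction's target word against detected objects."""
    for detection in scene:
        if detection.label in instruction.lower():
            return detection
    return None


def plan_actions(target: Detection) -> list[str]:
    """Translate the grounded target into primitive robot actions."""
    x, y, z = target.position
    return [f"move_to({x}, {y}, {z})", "grasp()", "lift()"]


scene = perceive_scene()
target = ground_instruction("Pick up the apple", scene)
if target is not None:
    for step in plan_actions(target):
        print(step)
```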
Daphne Luchtenberg: Very interesting. Ani, we could talk about this for hours, but this is really just intended to be a good, strong primer. So let me finish with a final question for you. As you’ve been explaining some of these use cases and bringing some of these stories to life, what do you think is the most promising future development in this field, and how do you see these technologies evolving over the next decade?
Ani Kelkar: I’m very excited about the future of robotics. I think we’re on the cusp of robotics becoming truly adaptable and as capable in the real world as we’ve seen in the lab. And I think the technology that accelerates it is really the integration of foundation models within robotics. Think of the OpenAI ChatGPT moment. I think we’re about to have something similar in the robotics domain that will be transformative.
Coupled with advances in AI, we’re seeing a rapid acceleration in sensing technology. We no longer have to rely only on cameras. We have haptic sensors and touch sensors that give robots a better perception of the environment around them. And the plethora of start-ups and mature companies that are taking this on, from the likes of Tesla to Boston Dynamics to Figure to Agility Robotics to 1X, leaves me with a lot of optimism that over the next decade, general-purpose robots will move from novelty to necessity, especially in sectors where there are significant workforce shortages and a need for business resilience. And the future of robotics isn’t about replacing humans. It’s about augmenting human work in a way that’s scalable and intelligent and that elevates the work humans do.
Daphne Luchtenberg: Fantastic; thanks so much. You’ve been listening to McKinsey Talks Operations with me, Daphne Luchtenberg. If you like what you heard, subscribe and stay tuned.


