Transforming R&D with AI: Breaking barriers and boosting productivity

| Podcast

Accelerated innovation has been the driver of economic prosperity in the past hundred years, but today, R&D faces a range of hidden obstacles. Each dollar spent on R&D delivers less innovation over time. In this episode of McKinsey Talks Operations, host Daphne Luchtenberg speaks with Ravi Rajamani, managing director and global head of AI Blackbelts at Google Cloud, and Ben Meigs, a McKinsey associate partner, to discuss how AI is reshaping the R&D landscape.

The conversation explores how AI can boost productivity, from accelerating processes and enhancing design generation to streamlining validation. Recent McKinsey research shows that AI could substantially accelerate R&D processes across industries, but realizing the potential of AI to accelerate innovation also requires organizations to change.

The following conversation has been edited for length and clarity.

Daphne Luchtenberg: I’m excited to welcome Google Cloud’s Ravi Rajamani, managing director and global head of AI Blackbelts, as well as Ben Meigs, an associate partner in McKinsey’s product development group. They recently got together on a panel at McKinsey’s annual R&D Leaders Forum and had such a great conversation that we wanted to bring those insights to our McKinsey Talks Operations audience. Ravi and Ben, great to have you both here today.

Ravi Rajamani: Thank you, Daphne, and thank you, Ben. I’m happy to be here.

Ben Meigs: Great to be with you.

Daphne Luchtenberg: Ben, why don’t you kick us off? Tell us a little bit more about the trends you’re seeing in AI and how it’s being used in R&D.

Ben Meigs: We’ve seen a real shift in the past 12 to 18 months, with significant uptake for AI in engineering and R&D in hardware industries like auto, aerospace, and heavy industrials. Most people know that gen AI has a strong track record in software engineering. The hardware domain is more challenging for many reasons, including the fact that hardware products have a major software component that needs to be integrated with the hardware. But we are seeing AI progress accelerate.

Most companies in these spaces are at least piloting AI for their hardware engineering teams. And the more mature ones are actively using a handful of use cases regularly. These range from customer insights and research, where companies are testing concepts much more rapidly and in a more analytical way, to design generation. Things like prompt to CAD 3D models. They’re being used in validation, both reducing physical testing and streamlining simulation. And there’s an emerging use case for agent orchestration around all aspects of the digital thread, where product data lies throughout the life cycle. There’s all sorts of exciting, groundbreaking work that’s happening, and it’s real.

Daphne Luchtenberg: Very interesting. Ravi, this is where I’d love to bring you in. We’re really excited to have you on, since Google is absolutely a leader in adopting AI into your own engineering practice. As your CEO, Sundar Pichai, stated recently, 30 percent of Google’s code is now generated by AI. Can you talk a little bit about how Google approaches this and what it took you all to get there?

Subscribe to the McKinsey Talks Operations podcast

Ravi Rajamani: Yes. Sundar announced publicly that 30 percent of all new code at Google is now generated with assistance from AI. And this figure continues to increase. It was over 25 percent just a few months ago. This underscores what we see as rapid internal adoption by Googlers and the effectiveness of our AI tools. A lot of times, customers ask me how I interpret that 30 percent.

I say the developers are leveraging AI-generated suggestions for nearly one out of every three code changes. While this percentage of AI code is impressive, at Google, we still continue to emphasize the role of human oversight. The code is reviewed and accepted by engineers, highlighting that it’s profoundly augmenting human work across many different industries and many different use cases.

What I have seen from our internal teams is that adoption was heavily influenced by the amount of trust people have in this AI-generated code. And this absolutely influences the tool adoption. And when you say trust, one end of the spectrum could be zero trust. The other end could be 100 percent trust.

I don’t think both are right. That’s not appropriate. I think you want to find a happy medium, and this is not unique to AI. Today, when human developers build code, we have 100 percent code reviews, so it’s the same thing. We want to do it for our AI-generated code, too. So, processes must evolve. AI is evolving so fast. But our overarching vision for application development is deeply centered around how you can embed AI throughout the entire software life cycle.

We’ll talk about some of the use cases that Ben mentioned. The learnings translate very nicely. But the idea here is that this is a holistic integration that reflects more of a strategic imperative rather than “AI is just another tool. Start using it.” Our expectation is that all Googlers would embrace these tools, signifying a fundamental shift in how our work is performed.

Daphne Luchtenberg: Ben, let me come to you. Ravi has described how this shift is taking place at Google. Are we seeing these developments and shifts emerge in a similar way in other organizations?

Ben Meigs: We’re seeing them in pockets. One area where there’s more maturity is the simulation domain. There was a stat from NAFEMS, which is the International Association for the Engineering Modelling, Analysis and Simulation Community, that 30 percent of all of the new methods published by simulation researchers were using AI for simulation. And we’ve seen companies that have replaced large portions of what is traditionally a physics-based simulation workflow, so very compute heavy, with AI surrogates. So there’s more maturity there, but I think we aren’t really seeing most companies at the sort of maturity that you’re talking about, Ravi, when it comes to hardware engineering in these complex products.

Daphne Luchtenberg: That’s really interesting. Let’s shift gears a little bit and talk about AI implementations at companies that have to incorporate both software and hardware engineering into their products. The model landscape is changing quickly, and numerous major models have been released over the past few months. Ravi, what are you seeing in terms of these models that excel at research and engineering workflows? And how should companies think about model selection?

Ravi Rajamani: The pace of innovation in this market is insane. I’m a practitioner, and every day I see new models, new tools that just keep getting better and better. In the brief period that these AI models have been available, across software and hardware domains and across industries, organizations of all sizes and market segments are not just experimenting. We are starting to see customers put this into production across their work, across their domain, and doing so at a speed rarely seen with new technology. This is just taking off and taking off fast, and that leads a lot of C-level folks to think about questions such as: How should I be evaluating progress? How should I be thinking about integrating AI into the R&D life cycle? How do I automate repetitive tasks? How do I accelerate complex processes?

Our strategy at Google is anchored in what we call a platform-first approach, because we recognize models are going to come from all over the place. We have first-party models, part of our Gemini family. 2.5 Pro is the latest and comes in different flavors to cater to different customer requirements for performance, speed, cost, et cetera.

As part of our platform, we also offer third-party models such as Anthropic Claude and a whole suite of open-source models, because we are also seeing that a lot of customers have unique use cases, especially in regulated markets, where they want to combine a cloud model with something that could run at the edge. We’re starting to see some really interesting use cases, but across the board, our customers are asking for choice and optionality built across the platform, and that’s really what our strategy is.

Daphne Luchtenberg: Fascinating. Ben, coming back to you. With this proliferation of choice, you’ve got to make some decisions. How should R&D departments think about their AI investments? You’re working with many companies that are navigating this question. What are you hearing from them?

Ben Meigs: This is the really important question that everybody’s asking right now. And I think I’ll just echo a little bit of what you said before, Ravi, on the point around tech and model choices. I think we’re seeing the same thing—no company wants to be locked into a certain vendor because of the pace at which this is evolving. The organizational capacity to test and learn and ingest new technology and swap it out is the muscle that companies need to build.

When it comes to what you’re going to fund—in terms of what domains, what functions, what use cases, that’s the question. At McKinsey, we talk a lot about the importance of an ROI-based approach to prioritization, both for R&D projects and for other investments. This is super important.

With AI, there often isn’t a good basis of estimate for savings, because it’s all so new. So we find the best way to demonstrate the value of an AI use case is with a simple, lean, proof of concept where you measure the impact on productivity, the quality of output, and other business or technical KPIs that you’re seeking to achieve with that use case. Companies do need some budget that’s just for experimentation.

You don’t want too high a bar for just getting your foot in the door and doing that testing, so you can move quickly. Where you need to bring in the high rigor is when prioritizing the much larger investment needed to scale up, backing the proven use cases that have demonstrated the largest impact. And you really can’t try to do them all. Not all domains are created equal, so you have to think about your company’s strategy. If you’re a company whose value creation is rooted in R&D, then applying AI to R&D workflows is likely to be one of your highest, if not the highest, ROI levers. So you need to start thinking about AI investment the way you think about R&D investment overall.

Ravi Rajamani: Let me talk about my own company. Our approach to AI adoption is not just about deployment. As I said, this is not just about checkboxes. It is about measurable impact. You need to have a value framework that strategically measures the impact that AI is bringing through. If you look at a certain workflow or a task that can be time saved through AI applications, then you can save that time for more valuable, high-impact work. One way to look at it is that AI is an accelerator for innovation.

This goes beyond just a simple output and more toward a nuanced understanding of how AI can contribute to the domain that you’re in, such as R&D. What is the strategic fit here? How well do your AI initiatives align with your top-level goals? But also invest strategically to get your workforce excited because they know the domain so well. At Google, we think that finding that balance is really interesting because a lot of these great ideas come from Googlers themselves. They say, “Let’s go and do this,” and suddenly we have a prototype built out, and then we invest more.

Ben Meigs: I love that, Ravi. I totally agree, and Google is such a great cultural example of encouraging bottom-up innovation. There’s a balance, because you need some coordination. But where I’ve seen companies succeeding, it is exactly that—where they encourage everyone to experiment and build. And there are some resources and some common governance practices behind it all. Culture is so important.

Daphne Luchtenberg: Indeed, because it’s not just technology that’s going to move the needle, right? It’s also about thinking through governance, the operating models that are needed to successfully adopt and scale AI, specifically in engineering and R&D. Ravi, let’s come back to you. What do you think are the key success factors?

Ravi Rajamani: Most enterprise customers we talk to say the AI investment is multifaceted because it impacts multiple areas of how they do business today. You want to emphasize exactly what the strategic alignment is. How do I do this responsibly when sometimes they have an existing way of doing it? How does implementing AI impact your business?

From a governance standpoint, it starts with what sort of data governance and platform foundation you have. You have to leverage your data, which could be sitting in multiple data stores across silos. How do you get that into a unified platform for building your AI models, for training, deploying, and managing these things at scale? Then, how do you control the quality? And how do you now operate AI at scale? These are things that need to be anchored on a strong organizational/governance strategy.

Personally, I would not trust AI 100 percent, just like we don’t 100 percent trust human-written code. We need to have a governance for code reviews and for good reasons, right? So I think melding some of these questions is helpful—about where your data stores are, who manages that, what sort of data is piped into your model building, how do we build what we call an eval framework. Because you go from one model version to another, with so many different changes. How do you make sure there are no regressions, that you don’t go back on functionality, or the model starts behaving in a way that you didn’t anticipate?

It’s a burgeoning area of research, but there are lots of best practices that we already work on with customers.

Ben Meigs: I think the other organizational consideration is that this is going to be a fundamental shift in how we work. You need to extend that thinking to your entire product development process. To get the largest benefit from AI, you can’t just automate steps in that legacy process; you need to fundamentally rethink the way that products are concepted, designed, and taken to market end to end. Ravi, I also liked your point about not trusting 100 percent of human-written code. That’s why you have code reviews. In hardware engineering, you have stage gates. The stage gates that worked for human engineers 20, 30, 40 years ago will need to change, and probably significantly, with AI.

There are many steps that are going to be eliminated and many others that can now be done in parallel, instead of in a waterfall approach. On the other hand, new checks are needed. As an example, we work with a turbine manufacturer that took 11 months out of an 18-month process by using AI for design optimization. This company was able to remove a bunch of stage gate steps, but they also added AI model validation as stage gate criteria before starting that optimization process, to offset the more limited human role. If you’re not rethinking end to end, you’re going to miss out on a lot of the benefits that AI can bring.

Daphne Luchtenberg: Got it. AI is changing every aspect of life and work, but there still seems to be such a lot of organizational inertia to overcome. How should we be thinking about incentivizing and motivating teams to drive the adoption of AI? And how can we drive the mindset shift and successfully scale it? Ben, what do you think?

Ben Meigs: It’s such an important point. We work with companies all the time on these big, difficult changes for their employees, and this is the biggest of all. Thinking specifically about the engineering domain, companies need to understand how deploying AI is going to impact an employee’s experience—the good and bad.

Most engineers got into the job because they love solving hard problems, and they love building products. AI can take away a lot of the administrative work and make the job more creative and exciting. But it can be a bit scary, too. Every day in the news, there’s talk about job elimination because of AI. Companies need to harness that narrative and start talking about the successes, where people built use cases that freed up their time or achieved a better result in terms of the product or customer success. More broadly, you need a deliberate strategy for change management. This needs to include things like: Are your leaders role modeling it? Do your leaders use AI? Do they talk about how they use AI?

Then there’s upskilling for people, to get them comfortable with using AI—not just for the early adopters—and reinforcing using AI with formal mechanisms. This is your stage gate process. And it’s also things like performance reviews—we’ve heard of some companies making using AI more substantial criteria in how they’re evaluating employees. You need this holistic approach. It can’t just be one mechanism.

Daphne Luchtenberg: And Ravi, how is that manifested at Google?

Ravi Rajamani: I totally agree with what Ben was saying. I think it starts with the leadership team, the executive team. They must foster a culture of experimentation and agility. You have to encourage iterative improvements. At Google, if you look at our consumer business, we constantly keep adding to it because you fail fast, you process that feedback, and you’re able to respond to your customers’ needs much, much quicker. So that piece of innovation and environment of psychological safety has to come from your leadership team. Fundamentally, you cannot view AI as a short-term trend. This is a big technological shift that drives how you innovate, how you grow, and how you operate in this new world. Try to have a holistic view. AI cannot be done in a silo is my point. It has to start with a cross-functional alignment.

And you do have to recognize that there are teams and individuals who will resist by blocking it or simply not working on or with it, because they aren’t skilled in it. You need the culture and the safety to say “Listen, you’re going to learn and we’re going to skill you in new areas and new technologies,” because these people bring much-needed domain expertise.

Daphne Luchtenberg: Absolutely. So we’re talking about how companies can maximize the use of AI in product development while they’re still navigating all these challenges. And they can’t afford to have too much disruption to critical processes, right? Because the day job needs to continue, and the products need to continue to fly out the door. So how do people navigate this now?

Ravi Rajamani: I think you want to start small. Pick a low-hanging-fruit use case—maybe automating a simple, repetitive process, and see how you can roll that out, get some quick wins, embrace the process and what the learnings are.

Pace is what is important here. You’re not going to go into the six-, 12-month-long sort of evaluation cycle about whether to do it or not. Get started quickly and enable your teams to be more dedicated. Learn quickly so that you can pivot or recalibrate as needed. We should challenge the traditional way of thinking. That’s where the rubber meets the road. I’m sure Ben has a lot to say about this.

Ben Meigs: I think, with the current AI capabilities and what’s already been proved, we’re at the beginning of an exponential growth function. Ravi, you said before, this is not a trend. It’s not a moment in time. AI is the future. And for companies facing a choice of do you put a meaningful effort and investment into AI or do you wait and see, I think companies that wait and see for another 18 months are never going to be able to catch up with the companies that are already on that exponential growth curve.

So there’s an existential choice for a lot of companies around how you go into AI in a way that respects everything we’ve talked about in terms of bringing your employees along, doing it safely, and not getting lost in all of the complex choices out there. That’s what leaders need to reckon with, and what they need to have a strategy for.

Daphne Luchtenberg: Well said. Thank you both so much for sharing these insights with us. It seems that for companies that rely on R&D to deliver value for their organization, AI is already becoming a massive differentiator. Adopting it effectively and scaling it safely will be how some companies leap ahead of others. Winning the hearts and minds of human users will be fundamental. And from a governance perspective, companies need to build muscle to evaluate, integrate, and deploy models and other AI-based tools quickly into their workflows. Thank you both for being with us today.

Explore a career with us