If we assume artificial intelligence (AI) will be a game changer for almost all industries, how will companies turn that potential into returns? In the pharmaceutical sector, simply bolting AI onto business as usual likely won’t deliver tangible results, industry leaders said on McKinsey’s Eureka! podcast.
Investment in AI is increasing at a fast clip. Companies invested more than $250 billion in it last year. In the pharma industry alone, the AI market is projected to grow from more than $4 billion this year to a whopping $25.7 billion by 2030. Amid this surge, medicine makers have yet to see substantially shorter development timelines or improvements in preclinical or clinical success rates.
What will drive success? Pharma leaders told us AI deployment in their industry isn’t just about adding the technology to accelerate existing processes. The effort will require a complete reimagining of drug discovery and development workflows. But leaping headfirst into AI—the classic “move fast and break things” mindset of start-up culture—won’t work in a highly regulated industry that touches patient lives.
On McKinsey’s Eureka! podcast, these experts offered their perspectives on how AI could transform research and drug development:
- Ashita Batavia, head of hematology and oncology data sciences, R&D, at Johnson & Johnson Innovative Medicine
- Kim Branson, senior vice president and global head of AI and machine learning at GSK
- Lykke Hinsch Gylvin, chief medical officer and head of global medicine at Boehringer Ingelheim
- Howard Jacob, vice president of genomics research and head of data integration at AbbVie
- John Marioni, senior vice president and head of computational sciences, Genentech Research and Early Development
Each podcast guest described what makes AI rollouts effective—insights that align with McKinsey’s principles for implementing AI successfully. They center on culture, technology, and talent.
The six enablers of successful AI deployment
1. A successful AI rollout is the result of clear goals and a rethink of the process; it’s more than just tech—it’s part of a strategy focused on business value
You can’t drop an AI model into an existing workflow and expect transformation to happen. Companies need to take a big-picture view and think through how AI fits into their broader strategy. Key questions include: What problem are we solving? What’s the blueprint for capturing value? How do we define and measure return on investment?
AbbVie’s Howard Jacob noted that success with AI comes from rethinking processes and building new capabilities, not layering tools onto legacy systems. “Inside AbbVie, we have begun to look at everything within the past year—early target discovery, target development, biologics design, and patient recruitment and engagement,” Jacob said. “The uptake is just extraordinary, and I’m glad we put the infrastructure in place so we could start running fast.”
2. Analytics and AI models can turn data into insights that inform decision-making and accelerate R&D
Algorithms and models tailored to specific use cases help companies transform data sets into insights that advance business and scientific objectives. Those use cases span drug discovery and development as well as clinical trial design and data analysis.
For example, Ashita Batavia at Johnson & Johnson Innovative Medicine (J&J) said the company is using algorithms that review diagnostic test images to find patients who might be a good match for new treatments. The company also uses deep-learning algorithms to prescreen tissue samples from patients with bladder cancer to help determine whether they might qualify for clinical trials.
“[To a] digital pathology image we could apply an algorithm that we built and validated to predict the presence or absence of a qualifying mutation for a trial, which pathologists are unable to do,” said Batavia. “With the algorithm . . . we were able to prescreen and accelerate the time to an answer or a screening decision for some patients.”
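To make the mechanics concrete, here is a minimal sketch of what such a prescreening step might look like, assuming a trained two-class image classifier. The model artifact, file layout, class labels, and threshold are hypothetical illustrations, not J&J’s actual pipeline.

```python
# Minimal sketch: prescreening digital-pathology images with a trained
# classifier. The model file, labels, and threshold are hypothetical.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

# Standard ImageNet-style preprocessing; a real pipeline would match
# whatever the model was trained on (stain normalization, tiling, etc.).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = torch.jit.load("mutation_classifier.pt")  # hypothetical artifact
model.eval()

def prescreen(slide_dir: str, threshold: float = 0.8) -> list[str]:
    """Flag slides whose predicted probability of a qualifying
    mutation exceeds the review threshold."""
    flagged = []
    with torch.no_grad():
        for path in sorted(Path(slide_dir).glob("*.png")):
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            prob = torch.softmax(model(x), dim=1)[0, 1].item()
            if prob >= threshold:
                flagged.append(path.name)
    return flagged

# Flagged slides would go to a pathologist and the trial team for
# confirmation: the algorithm prescreens; it does not decide eligibility.
print(prescreen("slides/"))
```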
Boehringer Ingelheim is applying AI to rethink trial design. Lykke Hinsch Gylvin, its chief medical officer and head of global medicine, pointed to the company’s use of digital twins—virtual patient models that help simulate outcomes and reduce reliance on placebo groups.
“These tools help us to accelerate speed while minimizing the number of patients on the placebo control arms, which is a win–win for us and the industry,” she said. These models speed up trials and reduce patient burden without sacrificing data quality.
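The podcast doesn’t detail Boehringer Ingelheim’s specific method, but one common formulation of the digital-twin idea is prognostic-covariate adjustment: a model’s predicted outcome for each patient serves as a “twin” covariate, shrinking the variance of the treatment-effect estimate so fewer control patients are needed for the same statistical power. A minimal sketch on synthetic data:

```python
# Minimal sketch of the idea behind digital twins in trial design:
# a prognostic model predicts each patient's expected outcome under
# control, and adjusting for that prediction shrinks the variance of
# the treatment-effect estimate, so fewer placebo patients are needed
# for the same power. All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 400
baseline = rng.normal(size=n)                    # baseline covariate
treated = rng.integers(0, 2, size=n)             # 1 = active arm
twin = 2.0 * baseline                            # prognostic ("twin") score
outcome = twin + 0.5 * treated + rng.normal(size=n)

def effect_se(adjust: bool) -> float:
    """OLS standard error of the treatment effect, with or without
    adjusting for the digital-twin prognostic score."""
    X = np.column_stack([np.ones(n), treated, twin] if adjust
                        else [np.ones(n), treated])
    resid = outcome - X @ np.linalg.lstsq(X, outcome, rcond=None)[0]
    dof = n - X.shape[1]
    cov = resid @ resid / dof * np.linalg.inv(X.T @ X)
    return float(np.sqrt(cov[1, 1]))

print(f"unadjusted SE:    {effect_se(False):.3f}")
print(f"twin-adjusted SE: {effect_se(True):.3f}")  # smaller -> fewer controls needed
```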
3. A robust tech stack is essential to provide computing power, data infrastructure, and tools for model development
As outlined in our report published early this year by Jeffrey Lewis, Joachim Bleys, and Ralf Raschke, a modern tech stack supports insight generation and workflows, along with the collection, storage, transfer, and processing of data throughout the discovery, research, and clinical-development stages. It’s made up of four layers: infrastructure, data, application, and analytics.
A well-integrated stack doesn’t just enable AI—it determines whether those tools can scale. Pharma organizations that rely on siloed systems and point-to-point integrations often struggle to move beyond isolated pilots. By contrast, a modular setup with well-organized data and systems that work together can support the deployment of AI tools across discovery and development. This structure helps teams use AI more effectively—making it easier to access clean data, automate research and trial operations, and apply models to tasks like identifying eligible patients for trials, designing molecules, or preparing regulatory submissions. The tech stack therefore becomes the foundation for moving from experimentation to enterprise-wide AI adoption.
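As a toy illustration of that modularity, the sketch below keeps the data and analytics layers behind narrow interfaces so either can be swapped without touching the application. The classes, fields, and cutoff are illustrative, not the report’s reference architecture.

```python
# Minimal sketch of why a layered, modular stack matters: the
# application layer depends on narrow interfaces, so a data source or
# model can be swapped without rewiring the pipeline.
from typing import Protocol

class DataLayer(Protocol):
    def load(self, dataset: str) -> list[dict]: ...

class AnalyticsLayer(Protocol):
    def score(self, record: dict) -> float: ...

class TrialScreeningApp:
    """Application layer: orchestrates, but owns no storage or model code."""
    def __init__(self, data: DataLayer, model: AnalyticsLayer):
        self.data, self.model = data, model

    def eligible_patients(self, dataset: str, cutoff: float) -> list[str]:
        return [r["patient_id"] for r in self.data.load(dataset)
                if self.model.score(r) >= cutoff]

# Toy implementations; a warehouse, a lake, or a new model would each be
# just another class satisfying the same Protocol, with no pipeline rewrite.
class InMemoryData:
    def load(self, dataset: str) -> list[dict]:
        return [{"patient_id": "P1", "biomarker": 0.9},
                {"patient_id": "P2", "biomarker": 0.3}]

class ThresholdModel:
    def score(self, record: dict) -> float:
        return record["biomarker"]

app = TrialScreeningApp(InMemoryData(), ThresholdModel())
print(app.eligible_patients("phase2", cutoff=0.5))  # ['P1']
```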
4. The right data is critical
Problem-specific data is an essential part of successful AI deployments. The nature of that data is also changing: it’s now being generated specifically to feed and improve algorithms. For example, when teams design a clinical trial, electronic health records can help identify patients who aren’t responding well to existing treatments. During drug development, practitioners need to match the data to the problem they’re trying to solve and build a strategy around it.
“The big thing for us was generating data with the explicit purpose of building models, because we believe that’s a source of advantage,” said Kim Branson of GSK, which invested in a robust global team called Onyx to carry out data engineering at scale. The team’s mandate is to ensure scientists have the right data and insights.
Data quality is also important, as poor-quality inputs can result in unreliable models, ineffective R&D efforts, and regulatory risks.
For AI systems to analyze data effectively, that data needs to be well organized, which can be a challenge for large organizations that run on legacy infrastructure.
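One way to enforce that organization is an automated quality gate that checks schema, missingness, and duplicates before data reaches a model. A minimal sketch, with hypothetical column names and thresholds:

```python
# Minimal sketch of an automated data-quality gate that keeps poor
# inputs out of model training. Columns and thresholds are illustrative.
import pandas as pd

REQUIRED = {"patient_id", "visit_date", "lab_value"}

def quality_report(df: pd.DataFrame) -> dict:
    issues = {}
    missing_cols = REQUIRED - set(df.columns)
    if missing_cols:
        issues["missing_columns"] = sorted(missing_cols)
    null_rates = df.isna().mean()
    issues["high_null_columns"] = list(null_rates[null_rates > 0.05].index)
    issues["duplicate_rows"] = int(
        df.duplicated(subset=["patient_id", "visit_date"]).sum())
    return issues

df = pd.DataFrame({
    "patient_id": ["P1", "P1", "P2"],
    "visit_date": ["2025-01-02", "2025-01-02", "2025-01-09"],
    "lab_value": [4.1, 4.1, None],
})
print(quality_report(df))
# {'high_null_columns': ['lab_value'], 'duplicate_rows': 1}
```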
5. AI rollouts require multiple talent streams
Companies must adopt a flexible and horizontal approach to talent management—through upskilling and reskilling—and maintain a team with a range of skills. GSK’s Branson emphasizes the importance of cross-functional teams as a foundation for successful AI initiatives. “Domain knowledge absolutely matters for assessing how the data is generated,” he said. “We have people that are both machine learning [specialists] and deep experts in some of these domains.” In clinical imaging, GSK assembled a team of people who have PhD and postdoctoral qualifications in clinical imaging, alongside those with machine learning expertise, he added.
J&J’s Batavia emphasizes the importance of what she calls “trilingualism”: proficiency in data science, science and medicine, and business strategy. “It’s rare to find a person who has, say, practiced medicine, has been a trialist who understands data sciences, and has a business strategy lens,” she said. “But if a candidate has a couple of those skills, they can upskill and learn the rest.”
Organizations also need to be open to collaboration when building in-house is not practical. “You need all those different elements—the technical expertise; the biological, chemical, and clinical insight; and the compute infrastructure—and you need to be able to incorporate them together sensibly,” Genentech’s Marioni said. “One company is unlikely to have all those elements in-house, so we collaborate closely on the compute side with Amazon Web Services and NVIDIA to accelerate how quickly we can train and deploy the models we’re coming up with together.”
6. A flexible change-management approach helps meet the evolving needs of both business and scientific stakeholders
AI is set to transform how organizations create and manage processes. A rigid, linear product development model is being replaced by an iterative cycle of prototyping, feedback, and continuous improvement. AI tools can accelerate product prototyping, allowing for faster testing and refinement. In addition, AI agents can autonomously lead or own parts of a process, raising new governance questions. For example, should organizations centrally control AI agents or allow teams to decide how independently they operate? For AI adoption to succeed, companies need to identify change champions across the organization, with clear KPIs.
At Genentech, Marioni’s team has embedded AI directly into experimental workflows through a “lab-in-the-loop” model. Rather than using AI as a passive analysis tool, Genentech uses it to actively guide experiments—particularly in molecule design.
“We start with the model, receive a prediction, validate it, and then improve the model,” Marioni said. “It’s a virtuous circle . . . you keep doing that until the model can generate good predictions that can complement and guide the next experiments that are being done.”
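That loop maps naturally onto active learning: the model nominates the candidates it is least certain about, the lab measures them, and the model retrains on the results. A minimal sketch on synthetic data (the model and selection rule are illustrative, not Genentech’s):

```python
# Minimal sketch of the "lab-in-the-loop" cycle cast as active learning:
# the model proposes the candidates it is least sure about, the "lab"
# (here a synthetic oracle) measures them, and the model is retrained.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(500, 8))               # candidate molecules (features)
true_label = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # hidden assay result

labeled = list(range(20))                        # small initial training set
model = RandomForestClassifier(random_state=0)

for round_ in range(5):
    model.fit(X_pool[labeled], true_label[labeled])
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)            # closest to 0.5 = least certain
    uncertainty[labeled] = np.inf                # don't re-test known candidates
    next_batch = np.argsort(uncertainty)[:10]    # model guides the next experiment
    labeled.extend(next_batch.tolist())          # "lab" returns measured labels
    acc = model.score(X_pool, true_label)
    print(f"round {round_}: trained on {len(labeled) - 10}, pool accuracy {acc:.2f}")
```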
While overcoming technological obstacles is always important, addressing the people element—the cultural issues—is often the critical enabler of success. Most of the challenges in AI rollouts, Marioni argues, are people related.
“Don’t underestimate that holistic approach—80 percent of the challenge related to many of these efforts is the people part, so getting that to work makes an enormous difference,” Marioni said. “If we do that right, the rest follows.”
Boehringer Ingelheim is also reshaping how teams work to support AI integration at scale. The company introduced a set of “new behaviors” designed to encourage entrepreneurial thinking across teams. These include collaborating with purpose, delivering to win, and innovating—behaviors that are not only encouraged but actively rewarded, said Hinsch Gylvin.
Facing the risks in pharma
As organizations train AI models on patient data, they face a range of risks—including privacy concerns, data misuse, and regulatory compliance challenges. According to our March report The state of AI, 47 percent of organizations using generative AI experienced at least one negative consequence, with cybersecurity a top concern.
Working with patient data means companies need to take a layered, deliberate approach to protection. As we noted in our November report Harnessing AI to reshape consumer experiences in healthcare, companies need to map out risks and develop mitigation plans. They should develop governance processes anchored in algorithm transparency and continually monitor AI-specific regulations.
Ultimately, the effectiveness of AI outputs depends on the quality of the input data. The data must also be representative of the domain it aims to model. Skewed data sets—where certain segments are disproportionately represented—pose a significant risk, alongside issues like longitudinal data gaps and AI model hallucinations.
Firms should also ensure that AI models are explainable. As J&J’s Batavia noted, explainability “doesn’t get as much airtime as it should . . . knowing what the data is, what the AI application is doing, why it’s doing it, and how we’re able to use it.”
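One simple explainability check along those lines is permutation importance, which reveals which inputs a model actually relies on. A minimal sketch with hypothetical feature names and synthetic data:

```python
# Minimal sketch of one explainability check: permutation importance
# shows which inputs a model actually uses, helping teams answer
# "what is the application doing and why." Features are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)      # depends on two features only
features = ["age", "biomarker_a", "biomarker_b", "prior_therapy"]  # hypothetical

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>14}: {imp:.3f}")
# A model leaning on unexpected features is a signal to revisit the
# data or the application before deploying it.
```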
The road ahead
AI investments in pharma will soon face heightened pressure for returns, and the race will ultimately result in winners and losers:
- The redesigners fundamentally rework their operating models, embracing automation, computer-generated simulated trials, and AI embedded across the value chain.
- The tinkerers dabble in discrete, one-off efforts, rolling out isolated pilots that may fail to scale. As a result, they may find that their investments yield low returns, harming their long-term competitive advantage.
The winners will be the firms—the redesigners—that weave AI into all of their workflows, not the tinkerers who layer it on top of existing processes.
Successful AI deployment will depend on how strongly company leadership embraces the technology to fundamentally shift how organizations work. Hinsch Gylvin of Boehringer Ingelheim pointed out that the key is having a clear strategy and goal for where, how, and when you apply AI and digital tools.
We know AI will transform drug discovery and development and could make real differences in patients’ lives. Experts told us that success with AI depends on a focused approach and a redesign of processes that supports both iteration and a rethinking of how work gets done. In short, redesigning and experimenting are essential to innovating with AI. Companies that do both well will be positioned to come out ahead.
Podcast guests also highlighted the importance of creating an adaptive culture, with the goal of gradually building confidence across organizations.
“Building trust is the key,” said AbbVie’s Howard Jacob. “We have to elevate the conversation to focus on what AI and [machine learning] outputs do well and how we can leverage them to continue to get better. I saw the same thing when genomics was first coming out. People felt it was a waste of money. We don’t have those conversations anymore.”
