On January 13, during the J.P. Morgan Healthcare Conference in San Francisco, Delphine Zurkiya, a senior partner at McKinsey, hosted a leadership breakfast to explore the convergence of life sciences and artificial intelligence. The conversation featured Sweta Maniar, Google Cloud’s global strategy and market leader for life sciences, and Thomas Fuchs, the chief AI officer at Eli Lilly.
The dialogue centered on a fundamental shift in the competitive landscape: organizations are no longer merely benchmarking against their traditional peers; they are now measuring their digital maturity against technology leaders in Silicon Valley.
We have two themes that have collided: the shocks happening in our industry and everything we now hear about what AI can do. Resilience comes from connecting those two things.
This shift is now a permanent fixture of boardroom discussions, where digital capability is no longer seen as a back-office expense. Instead, it is understood as a vital engine for growth and a way to build a lasting competitive advantage.
From saltwater to strategic assets
A central theme of the dialogue was the evolving nature of the industry’s data. In 2025 alone, healthcare and life sciences generated approximately 11,000 exabytes of data worldwide, yet roughly 97 percent of it remains untapped. One participant noted that the “data is the new oil” analogy is fundamentally flawed. Raw data is more like saltwater: abundant and all around us, yet unusable in its natural state.
AI tools can function as a desalination plant, making this data far more useful and more powerful.
The challenge is conversion. AI functions as the “desalination plant” of the modern enterprise, transforming vast reserves of previously unused sensor data, lab history, and unstructured records into information that can support real-time decisions.
This process is particularly transformative when applied to failure data. For example, some industry leaders are now leveraging millions of molecules known not to work—data that almost never reaches published literature. By training models on this hidden evidence that few others possess, organizations create a highly specialized data advantage that is nearly impossible for competitors to replicate.

The rise of the agentic enterprise
While generative AI changed who could participate by allowing non-technical teams to interact with data, the most significant breakthroughs are still ahead. The discussion pointed to a coming shift to agentic systems. Unlike standard models that require constant human prompting, agentic systems take autonomous action, operating continuously to move work forward while keeping people in the loop for high-level judgment.
The potential for impact is significant:
- Automating complex workflows: At least 75 percent of enterprise workflows—including complex R&D processes—contain components that AI agents can partially handle.
- Expanding capacity: These systems handle the time-consuming tasks that humans struggle to prioritize, operating 24/7. Current estimates suggest this could free up 25 to 40 percent of the time currently spent on routine tasks across the enterprise.
- Capturing specialized knowledge: Perhaps most importantly, well-designed agents can surface “tacit knowledge”—the logic behind a specific deal or the nuance of an experiment—that currently resides only in the minds of experienced employees.
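The "human in the loop" pattern the panel described can be sketched in a few lines of code. This is a minimal, purely illustrative example, not any company's actual system: the task fields, the escalation rule, and all names here are assumptions made for the sketch.

```python
# Hypothetical sketch of an agentic workflow with a human checkpoint:
# the agent completes routine work autonomously and escalates only
# high-stakes judgments to a person. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    high_stakes: bool  # e.g., anything patient-facing

def agent_step(task: Task) -> str:
    """Handle routine work automatically; route judgment calls to a person."""
    if task.high_stakes:
        return f"ESCALATED to human reviewer: {task.description}"
    return f"Completed autonomously: {task.description}"

backlog = [
    Task("Summarize overnight lab results", high_stakes=False),
    Task("Approve protocol amendment", high_stakes=True),
]
results = [agent_step(t) for t in backlog]
```

The design choice worth noting is that autonomy is the default and escalation is the exception, which is what lets such systems run continuously while reserving human attention for high-level judgment.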
Real-world wins: beyond the hype
The conversation moved beyond theory to highlight tangible results already being realized in the industry.
In manufacturing, one organization deployed a digital-twin AI solution to optimize the drying of active pharmaceutical ingredients (APIs). By analyzing sensor data, the system reduced the drying cycle from 50 hours to 35 hours, a 30 percent reduction. This efficiency gain alone resulted in millions of additional doses for patients every month.
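The throughput arithmetic behind that gain is worth making explicit. The cycle times (50 and 35 hours) come from the discussion; the assumption of continuous, uninterrupted operation is an illustrative simplification, not a reported figure.

```python
# Illustrative throughput arithmetic for the API drying example.
# Cycle times are from the discussion; continuous 24/7 operation is
# an assumption made for the sketch.
HOURS_PER_MONTH = 24 * 30  # assume a 30-day month of uptime

def cycles_per_month(cycle_hours: float, hours_available: float = HOURS_PER_MONTH) -> float:
    """Number of drying cycles a single line can complete per month."""
    return hours_available / cycle_hours

before = cycles_per_month(50)  # 14.4 cycles per month
after = cycles_per_month(35)   # roughly 20.6 cycles per month
gain = after / before - 1      # roughly 0.43, i.e. ~43% more batches
```

The point of the sketch is that a 30 percent reduction in cycle time compounds into a roughly 43 percent increase in batches per line, which is how a process tweak translates into millions of additional doses at scale.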
The commercial sector is increasingly resembling modern retail. AI-driven logistics tools are now helping sales forces plan their schedules and prioritize interactions with healthcare providers with greater precision. When teams see that these tools help them prepare more effectively and achieve better personal results, they adopt them naturally.

Culture and the “art of failure”
Successfully implementing these tools requires more than capital; it requires a cultural overhaul. To attract and retain the world’s best scientists, companies must signal that they view high-performance computing as a scientific instrument—no different from a telescope for an astronomer or a microscope for a biologist.
Furthermore, organizations must embrace machine learning as a process of trial and error. While patient-facing work demands the highest standards of proof, the discovery phase is where exploration should be encouraged. Constant iteration is the only way to make progress.
To bridge the adoption gap, some leaders are now incorporating AI-related contributions into annual performance reviews. By asking individuals to identify a specific project they initiated using AI, the organization sends a clear signal: digital engagement is a core part of the job, not an optional extra.
The 18-month horizon
The pace of change has rendered traditional five- or ten-year planning cycles obsolete. In an environment moving this quickly, the largest strategic investments must deliver measurable results within 18 months. Initial signs of success should be visible within the first four to six months to build momentum and organizational trust.
The companies that don’t use AI will be replaced by those that do.
As the session concluded, the warning for the future was clear: companies that do not use AI will eventually be replaced by those that do. The question is no longer whether the technology can perform; it is whether the enterprise is ready to use it.