AI foundation models now routinely contain hundreds of billions of parameters, placing unprecedented demands on the chips and systems that support them. Those demands set the context for a panel discussion at CES 2026 in Las Vegas titled “All In on AI: Betting on the Power of Next-Gen Chips,” moderated by McKinsey partner Syed Alam and featuring leaders from EMD Electronics, NVIDIA, Qualcomm, and Synopsys.
Panelists consistently emphasized that AI’s next phase is not constrained by algorithms alone. Instead, progress will depend on compute capacity, memory hierarchies, materials science, manufacturing precision, and the industry’s ability to optimize these elements quickly and reliably.
A new wave of innovation—and some new complications
As models have grown larger and more sophisticated, the amount of computation required to train and deploy them has increased sharply. “Generative AI with large language models, agentic AI, and physical AI—the dramatic progress in capability is largely fueled by the amazing progress in the underlying compute,” said Shankar Krishnamoorthy, chief product development officer at software solutions company Synopsys.
But the typical generational improvement in chip performance, about 30 percent, is not enough to train new AI models that must parse ever-increasing volumes of text, images, video, and data from lab simulations. That gap is prompting semiconductor companies to push beyond their usual boundaries.
Carmen True, vice president and head of compute marketing at semiconductor designer Qualcomm, noted that customers often approach her company to discuss how they can meet the performance and power requirements for AI applications—for instance, determining what computations can occur within devices and which must go to data centers.
Companies along the entire semiconductor value chain are also under extreme pressure to accelerate development because next-generation AI chips are now expected on an annual cadence. Suresh Rajaraman, executive vice president and head of thin films at EMD Electronics, noted that it has sometimes taken 10 years to move from early development to high-volume manufacturing in the materials industry.
To reduce timelines, EMD Electronics is now using AI for material design, testing, and scaling. “You need a chemist’s intuition to determine whether something will work, but AI accelerates that process. It allows you to stand on the shoulders of giants faster,” Rajaraman said.
In another complication, AI is sometimes pushing manufacturing and materials science into unfamiliar territory. For instance, AI workloads are driving many chips, including those for logic, DRAM, and NAND, toward 3D integrated circuits, which vertically stack multiple chips or layers. Rajaraman noted that the level of precision required "is like landing a man on the moon, but at the exact same spot, over and over."
How companies use AI
Beyond products, panelists described how AI is reshaping their own organizations and helping their customers across multiple sectors.
With materials, Rajaraman returned to the mismatch between decade-long development timelines and annual chip cadences. "That time mismatch is unsustainable," he said. "So we're applying AI across how we design materials, how we test them, and how we scale them." In practice, EMD Electronics uses AI to accelerate molecule discovery, predict how materials will behave in real fabrication environments, and uncover correlations across anonymized manufacturing data without exposing sensitive intellectual property.
Taking a marketing perspective, True said that Qualcomm uses AI extensively for communications, translation, and global content delivery, areas where speed, consistency, and accuracy are critical. For consumer companies, True noted, edge AI is often essential to success, from smartphones and wearables to automotive and healthcare applications. "AI has been in phone cameras for years," True said. "It just wasn't visible. Now it's becoming central to how people interact with technology in everyday life."
Quantum’s emerging role
Throughout the panel, the relationship between AI and quantum computing was a recurring topic. Sam Stanwick, who leads NVIDIA's quantum computing product team, noted that the company is building AI tools to control, calibrate, and correct errors in quantum computers. "Advances in quantum today at every level—from designing chips to designing algorithms—are made with AI," he said.
Stanwick also pointed out that quantum computing may enhance AI applications, but it will not solve every problem. "Where we do expect it to be useful is for some very important problems that are usually centered on simulating nature—so drug development, materials discovery, questions on things such as battery chemistry and how a molecule binds to another molecule. These are inherently quantum mechanical problems," he noted. "Think of what generating training data based on the quantum mechanical nature of the natural world could do for AI when solving problems in biology, chemistry, physics, and everywhere quantum mechanics shows up."
Looking ahead, Stanwick expects the relationship between the two fields to deepen. "They are really symbiotic technologies," he said.
***
By the end of the panel, one theme was clear: AI's next phase will be unlocked not by a single breakthrough but by sustained progress across compute, materials, manufacturing, and system design. Together, these forces will determine how far, and how fast, AI advances.