Liquid AI, a unicorn AI lab in Cambridge, Massachusetts, builds foundation models in a fundamentally different way. It designs AI from scratch, with a hardware-in-the-loop approach that allows it to deliver the highest speed and lowest latency on any processor, including graphics processing units, central processing units, and neural processing units. Rather than designing transformer models, it builds liquid foundation models (LFMs), a new generation of AI models that Ramin Hasani and fellow cofounders Alexander Amini, Daniela Rus, and Mathias Lechner pioneered at MIT. LFMs are high-performance models that can process text, images, audio, and video simultaneously on any device, such as phones, laptops, wearables, and home appliances, as well as in cars and airplanes. Hasani sat down with QuantumBlack, AI by McKinsey, to discuss Liquid AI’s academic roots, its business building, and its focus on optimizing AI for devices.
This interview has been edited for length and clarity.
The journey from MIT to the marketplace
QuantumBlack: Your company is built on different technology than the large language models [LLMs] powering the generative AI most people are familiar with. Can you give us a quick history of your founding, from research lab to mainstream marketplace?
Ramin Hasani: When we started our machine learning research about a decade ago, we wanted to draw inspiration from nature and physics, from how cells process information, and bring those learnings into the machine learning world. For example, we studied animal brains to build new and better algorithms.
At MIT, in Professor Daniela Rus’s Computer Science and Artificial Intelligence Laboratory, we had been focusing on AI for robotics. We developed algorithms that were significantly more compressed than the typical AI systems of the time and that performed much better. For instance, driving a car autonomously was possible with a handful of neurons, compared with artificial neural networks that need millions of parameters.
We realized we could take that brain-inspired technology and apply it to many different domains, going from robotics to predictive markets to healthcare. We also saw that the technology could bring a lot of value and outperform existing models in a much more efficient and compact way.
We called the flexible intelligence systems we designed “liquid neural networks,” as in “liquid” for flexibility. These systems facilitated better decision-making in highly automated tasks, such as autonomous driving.
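To make the idea concrete, here is a minimal sketch of a liquid time-constant (LTC) neuron, the building block behind liquid neural networks as described in the published research. Each neuron’s state follows a differential equation whose effective time constant shifts with the input, which is what makes the network “liquid.” The layer sizes, weights, nonlinearity, and input signal below are illustrative assumptions, not Liquid AI’s production code.

```python
import numpy as np

def ltc_step(x, inputs, W_in, W_rec, bias, A, tau, dt=0.05):
    """One fused-Euler update of a liquid time-constant (LTC) layer.

    The state follows dx/dt = -[1/tau + f(x, I)] * x + f(x, I) * A,
    so the effective time constant 1 / (1/tau + f) changes with the
    input -- the "liquid" behavior described in the interview.
    """
    # Bounded nonlinearity gating both decay and drive (illustrative choice).
    f = 1.0 / (1.0 + np.exp(-(inputs @ W_in + x @ W_rec + bias)))
    # Fused (semi-implicit) Euler step keeps the state stable and bounded.
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Tiny example: 4 neurons driven by a 2-dimensional input signal.
rng = np.random.default_rng(0)
n_in, n_hidden = 2, 4
W_in = rng.normal(size=(n_in, n_hidden))
W_rec = rng.normal(size=(n_hidden, n_hidden)) * 0.1
bias = np.zeros(n_hidden)
A = np.ones(n_hidden)          # asymptotic state targets
tau = np.full(n_hidden, 1.0)   # base time constants

x = np.zeros(n_hidden)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    x = ltc_step(x, u, W_in, W_rec, bias, A, tau)
print(x)  # hidden state after 100 steps
```

The fused step keeps each neuron’s state bounded, which is part of what lets such networks remain stable with very few neurons.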
[That’s when we told ourselves that] we might want to expand the horizon of what we could do with this core technology and build and scale Liquid AI so we could go from predictive AI to generative AI. LFMs became the core building block of our technology, building on everything we learned about nature, physics, and algorithms during our decade of research at MIT.
Reducing cost without sacrificing quality
QuantumBlack: How would you explain an LFM to a CEO?
Ramin Hasani: The systematic way we design intelligent algorithms for enterprise applications allows us to bring a lot more control, certainty, and reliability to the deployment of AI systems. LFMs, in contrast to transformer-based models, give you confidence that you have the most cost- and energy-efficient AI stack delivering the highest quality.
LFMs are the optimal choice of generative AI models to serve on a device—outside of data centers, in factories, in a car, and on your phone, laptop, and PC—or inside a data center for ultra-low-latency applications. These models will systematically reduce the cost of intelligence while delivering the same frontier-model quality on specialized applications.
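For a concrete sense of what serving a model on a device looks like, here is a minimal local-inference sketch using the open-source Hugging Face transformers library. The checkpoint name LiquidAI/LFM2-1.2B is an assumption based on Liquid AI’s publicly released open-weight models; any small local checkpoint could be substituted.

```python
# Minimal local-inference sketch (assumptions: the checkpoint name and the
# prompt; substitute any small open-weight model available locally).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # assumed identifier for an open LFM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to fit device memory
    device_map="auto",           # CPU, GPU, or Apple Silicon, whichever is present
)

prompt = [{"role": "user", "content": "Summarize today's service-log errors."}]
inputs = tokenizer.apply_chat_template(
    prompt, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Everything below runs locally; no network call leaves the device.
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Once the weights are downloaded, generation runs entirely on the local machine, which is the privacy and connectivity point Hasani returns to later in the interview.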
That was the core premise of Liquid AI: efficient and reliable AI for all. We wanted to give enterprises the confidence to know that when they use AI, they are getting the best possible version, one that is coming from a fundamentally new kind of technology.
And this new technology is not just an alternative bet. It’s a first-principles way to look into the possibilities of general-purpose computers and get them to perform tasks with the energy requirements that the task actually entails.
Turning early clients into investors
QuantumBlack: How did you differentiate yourselves to investors who have already seen dozens of AI start-ups?
Ramin Hasani: For us, building a core technology with an A team that has complementary skills was the unlock for building Liquid AI. In this space, talent is everything: If you’re building a foundation model company, there aren’t many AI scientists who know how to build and deploy one from scratch with taste. You need experts and innovators.
All four cofounders of Liquid AI are AI scientists who are well connected in academic circles. One of the first things we did was approach our innovator friends about joining us. The first team at Liquid AI was well-known in industry and academic circles, and I think that became a cornerstone of our success, because technical talent with credibility is the most important thing when you’re introducing a radically different technology.
After talent comes the business, and I was determined to avoid hallucinating use cases. From day one, we kept extremely close to clients, including enterprises in sectors such as semiconductors, finance, consumer electronics, automotive, robotics, e-commerce, and healthcare. That proved very influential, since our first round of financing came together from our clients and their strategic investors. We approached these companies as partners very early on, and as we developed the technology, they saw the promise of what it could do for their own businesses.
The advantages of on-device AI
QuantumBlack: In a field largely defined by data centers and the need for significant compute power, your models run on tiny devices, from Raspberry Pis to smartphones. What kind of revenue models or customer experiences does on-device AI enable that cloud-based models don’t?
Ramin Hasani: On-device AI is a new market. To unlock this market, we had to innovate on the efficiency of generative AI algorithms to bring them to device-level processors, and we also had to make the models’ quality and reliability comparable to those of the frontier models in the cloud. There’s a lot of attention being paid to building data centers to host the largest and most sophisticated versions of AI. But there is so much more outside of data centers for us to explore.
For instance, with on-device AI, automotive companies can introduce in-car intelligence, allowing you to talk to your car. Why is that important? Because you cannot rely on the cloud to power a critical safety feature inside a car due to security, connectivity, and privacy issues. The intelligence has to be on-device, because if you suddenly experience network interruptions, you’re going to be in trouble.
Another advantage of device-aware AI is that the models are fast decision-makers. Applications in financial services and e-commerce, such as recommendations and high-frequency trading, are extremely latency-critical. In these applications, our models complete tasks with millisecond and microsecond latency. That speed is very challenging to achieve with larger models in these latency-critical, privacy-sensitive, and security-critical applications.
Our models provide the quality of frontier LLMs on specialized applications but with LFMs, which are up to 1,000 times smaller. As a by-product of the efficiency and speed of LFMs, the cost of intelligence comes down significantly.
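A back-of-envelope calculation shows what a 1,000-fold size difference means for memory; the parameter counts below are illustrative assumptions, not Liquid AI or competitor figures.

```python
# Rough memory footprint of model weights: parameters x bytes per parameter.
# Parameter counts are illustrative assumptions, not vendor figures.
def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    return params * bytes_per_param / 1e9

frontier = 700e9   # hypothetical 700B-parameter frontier model
on_device = 0.7e9  # hypothetical 700M-parameter model, 1,000x smaller

print(f"frontier, 16-bit:   {weight_memory_gb(frontier, 2):,.0f} GB")    # 1,400 GB
print(f"on-device, 16-bit:  {weight_memory_gb(on_device, 2):,.1f} GB")   # 1.4 GB
print(f"on-device, 4-bit:   {weight_memory_gb(on_device, 0.5):,.2f} GB") # 0.35 GB
```

At roughly a gigabyte or less, the smaller model’s weights fit comfortably in a phone’s memory, while the frontier model needs a multi-GPU server.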
Why we still need LLMs and big data centers
QuantumBlack: How can we think about allocating AI investment across diverse use cases spanning physical, edge, and cloud?
Ramin Hasani: I believe in a hybrid future. If you’re asking, “Do we really need all of this infrastructure being built for AI?” my answer is yes.
Why? Because we want to solve the most complex problems in the world with frontier AI. Larger and more elaborate AI systems are extremely useful because they can allow us to discover new science, new math, and new physics and, in general, to better understand the universe around us.
What is the goal of humanity? We want to understand where we are, who we are, and where we are going while enjoying the journey. So for that to happen and for us to extend human life, for example, we need to build more intelligent systems. I’m a techno-optimist and believe AI can help us cure cancer once and for all. But AI for scientific discovery is going to need a lot of energy that we do not have, and we need to get creative on that front.
But there’s this other type of AI that can solve day-to-day problems right on our devices while we strive to build the frontier AI. On-device AI extends AI beyond data centers. It enables a physical AI world where we have robots, AI glasses, and hyperpersonalized computers performing tasks on our behalf in society in a controlled and private way. We need this kind of intelligence at the edge, alongside cloud AI, to truly deploy AI for good at planetary scale.


