AI deployment is accelerating worldwide. Yet in many countries, regions, and sectors (particularly highly regulated ones), large-scale adoption is sluggish. This isn’t because the technology is immature; rather, it’s because leaders are concerned about their ability to build and run AI systems with some level of independence from foreign technology providers with respect to data, technology infrastructure, operations, and legal structures. For countries, the concerns might be at the national-security and domestic-economic level, while organizations might be more focused on privacy, intellectual property, and geopolitical issues.
Sovereign AI helps governments and organizations address issues concerning control, governance, and dependence, as distinguished from sovereign cloud, which is mostly about where data is stored. For Europe and parts of the Middle East, these questions are increasingly strategic, and there are several ways to approach them.
In this video Explainer, McKinsey’s Ali Ustun, Luca Bennici, and Melanie Krawina discuss what sovereign AI really means, why it is becoming a strategic priority, the risks leaders must manage, and how to implement it pragmatically without sacrificing innovation.
This interview has been edited for length and clarity.
What is sovereign AI—and how is it different from data or digital sovereignty?
Ali Ustun: Sovereign AI is either a country’s or an organization’s capacity to independently develop, deploy, and govern artificial intelligence using its own infrastructure, its own data, its own models, and its own talent. It is not about owning the technology. It’s about retaining full control over the entire AI life cycle—from the physical compute to the algorithmic logic.
Melanie Krawina: At its core, sovereign AI is about who controls the intelligence itself, not just the hardware, the infrastructure, and the data beneath the AI applications. Data sovereignty, in comparison, really focuses on the data sets—where the data is stored, where it is processed, and which legal jurisdiction it falls under.
Luca Bennici: We define sovereign AI as the ability of a country or an organization to build, run, and govern AI in a way that aligns with its own set of rules, security needs, and values. It comprises several dimensions: a territorial aspect—where the data and compute sit; an operational dimension—who can operate and switch these systems on and off; a technological and IP [intellectual property] ownership dimension; and finally, a legal dimension—whose jurisdiction applies.
Ali Ustun: The difference between data sovereignty and sovereign AI is that you can have data sovereignty but still not have sovereign AI. Sovereign AI is the intelligence layer that you build on top of your data. That distinction is something we see a lot of people getting confused about.
Melanie Krawina: It’s not a binary thing. It’s more of a spectrum that we talk about, where we really need to assess what kind of sovereignty level we need for a specific context.
Why is sovereign AI becoming a strategic priority now?
Melanie Krawina: Sovereign AI is becoming a strategic priority because in regions like Europe, it’s directly linked to economic growth, productivity, and competitiveness. We have seen that by building sovereign AI solutions and hosting capabilities, we can unlock roughly €480 billion in annual GDP impact by the end of the decade. And if we don’t do that, we’ll miss out.
Ali Ustun: Sovereign AI is becoming a strategic priority for three main reasons. Number one is what we call the liability squeeze. Courts are increasingly holding deployers liable for failures like bias or hallucinations, while vendors are aggressively trying to cap their liability. Sovereign AI allows organizations to build a liability firewall, ensuring that audit and control mechanisms are in place.
The second reason is geopolitical resilience. We are seeing a trend not to be dependent on a few global providers that create vulnerability around what we call “kill switches”: service denials or geopolitical conflicts.
And the third is economic impact. If a country relies entirely on foreign API-based models, the economic value of that data flow is completely external. Sovereign AI helps ensure that the GDP impact you create stays within your control as much as possible.
Luca Bennici: Sovereign AI is becoming more of a strategic imperative than just a niche topic. It’s a strategic resilience and autonomy consideration, but also an economic one. We estimate that roughly 40 percent of the value of AI is underpinned by sovereign or sovereign-enough solutions.
Melanie Krawina: In regulated industries like healthcare, banking, defense, or the public sector, we don’t see broad AI adoption, because sovereign offerings aren’t yet available. That’s actually a hindrance in adopting AI at scale.
Why does sovereign AI matter more in some regions than others?
Ali Ustun: Sovereignty is really a non-US concern these days. In the US, everything is effectively sovereign because most of the public hyperscalers operate there. The capacity is there. The applications are in your domain. The same applies in the China ecosystem.
But if you are more dependent on external ecosystems and other providers, then sovereignty starts becoming something you need to address.
Melanie Krawina: I think with Europe specifically, GDP growth is stagnating and labor productivity is sluggish. We really need a new booster, and AI can be that booster.
The lack of a sovereign AI offering in Europe is clearly limiting adoption. A lot of European companies and CEOs have told us they are not adopting AI at scale at this point, because there is presently no sovereign offering.
In terms of adoption, France is one of the most interesting examples right now. The French government is saying, “If we really want to be an AI nation, our public sector will start.” They are switching cloud providers. They are switching software providers so that core systems are built with European players. At the same time, they are deploying AI use cases in the public sector to improve service delivery. It’s a national strategy to walk the talk.
Who benefits from sovereign AI?
Luca Bennici: The value of sovereign AI spreads across several sectors in an economy. Governments and societies benefit from strategic resilience and independence, as well as GDP creation. It also better fits local norms, culture, and legal frameworks.
Enterprises benefit because it reduces vendor lock-in, which can lead to higher costs and limited interoperability. And providers and investors benefit because they are jumping on a fast-growing market segment where there is a lot of value at stake.
Melanie Krawina: Sovereign AI creates value across a broad set of stakeholders. For Europe alone, we have estimated a €480 billion GDP uplift at stake. Regulated industries would be the first to profit because they could start deploying AI use cases at scale if they have the right setup.
And finally, the providers themselves benefit. I would love to see us grow hyperscalers in-house in Europe and really create new jobs and be at the forefront of tech and AI sovereignty again.
What are the risks leaders need to manage?
Ali Ustun: We need to think about risks and trade-offs. One is latency. When you implement strict sovereign guardrails—input-output filtering, PII [personally identifiable information] reduction—you introduce latency. That can degrade user experience and reduce competitiveness.
The second is cost and waste. Sovereign clouds often carry a price premium. Ask yourself why you really need sovereignty; do you really need it for every single thing? Otherwise, you’re just going to be replicating a significant infrastructure stack that isn’t really needed.
The third is innovation risk. If you create strictly sovereign environments, you may lag behind the bleeding edge of global innovation. And finally, there’s energy intensity: these systems are resource intensive, and any plan must account for their energy requirements.
Luca Bennici: One of the biggest risks is mis-sequencing—building ahead of demand, whether infrastructure, GPUs, or models, more than talent and governance can support. There is also a cost element; often, sovereign solutions may be perceived as more costly. Third, the complex and fragmented ecosystem of partners may make implementation more difficult. And finally, there’s an organizational bottleneck that can slow down workload migration.
Melanie Krawina: The biggest risks right now are not ideological but execution risks. Even though many CXOs have thought about sovereign AI, not many have moved to sovereign solutions yet.
At this point, it’s all about workload sequencing: thinking carefully about the first few use cases to host on sovereign AI solutions, seeing success, and then scaling. Sovereignty is no longer about “Shall we do it?” but rather how to implement it in a sustainable way.
What are the risks if countries or organizations don’t pursue sovereign AI?
Luca Bennici: Governments also face risks from not implementing a sovereign AI strategy. The number one risk is dependence. They would be dependent on foreign tech, which can influence future road maps or the ability to develop certain applications.
Second, there is leakage. Governments import more technology and export less locally built tech. This gap results in a lower economic contribution to the national agenda.
Then there’s the regulatory aspect. There could be case-by-case exemptions instead of standard certified environments and regulated adoption.
And finally, they’re starting from a weaker point for negotiation. They have less leverage with hyperscalers or model providers and decreased power to insist on AI systems that reflect local realities.
Melanie Krawina: From a national or regional perspective, the main risk is that the control and the value happen somewhere else—that [Europeans] become just the receivers and the takers, and are not at the forefront of really shaping the most fundamental questions we have in the age of AI: the ethics, the guardrails, where to use it, and where not to use it.
From a corporate perspective, the biggest risk is that AI just doesn’t happen. In strongly regulated industries, many companies are still not deploying AI use cases at scale because of concerns about sovereignty. As a consequence, labor productivity is stagnating or declining, and much of the value and control is accruing somewhere else.
Ali Ustun: You can have data sovereignty but still not have sovereign AI. If you rely on a foreign model to process your data, then it is not truly sovereign AI.
What are the essential building blocks of sovereign AI?
Melanie Krawina: There’s no official definition yet of what sovereign AI is and what the building blocks are. In our research, we tried to simplify this and defined a full AI stack across seven layers, starting with foundational layers like energy and connectivity. Then you have the data centers, the cloud, up to the AI applications themselves.
When you say start with sovereign AI, I don’t think this will be an easy lift-and-shift model. CXOs need to start with a very pragmatic approach. They need to think about the first workloads that require a higher level of sovereignty than what we have today and take those use cases and start building them in a new sovereign stack.
Luca Bennici: Sovereign AI is really an ecosystem that needs to come together. You need a clear sovereign AI baseline and architecture blueprint. You need certifications, controls, monitoring, and incident response mechanisms. You need to think about data ecosystems because, ultimately, intelligence is created out of data.
And then you need a pragmatic, modular set of building blocks so you can use global frontier models when appropriate—but also own or fine-tune domain and language models where sovereignty and value matter most. And you need the right talent and operating model in place.
Ali Ustun: Leaders should recognize that not all AI workloads actually require strict sovereignty. You should segment your portfolio—use public models for generic tasks and reserve premium sovereign infrastructure for high-value IP and sensitive data. Otherwise, you’re just boiling the ocean.
How should leaders get started—practically?
Ali Ustun: This is something you need to partner up for. You can use a mix of neoclouds, local providers, and global hyperscalers. The ultimate goal is interoperability, not isolation.
If you think about it as a two-by-two—which workloads require real sovereignty versus which can run on public infrastructure, and which partner options you use—that mapping helps define the strategy.
Luca Bennici: A practical implementation road map is a phased approach. You start by establishing the baseline and unlocking early demand.
For instance, a telecom operator in the Middle East started by adopting a sovereign cloud platform for its internal needs to run the workload of its own network infrastructure.
Once they became more confident working with that platform, they expanded the set of functionalities and users to their enterprise customers. That phased approach has proven to be more successful than going for a disruptive approach from the beginning.
Melanie Krawina: There are use cases that companies already know would create value. For example, a car manufacturer in Europe has decades of proprietary production data. Right now, they do not deploy AI algorithms there because they don’t want to provide that corpus of proprietary data to third-party providers. Of course, this is not the best tech setup. What I would tell these leaders is that there are clear use cases that would be great on a conceptual level. Start with a small pilot—try replicating a sovereign AI stack with local players, for example—then see if there’s impact and scale it from there.

