Why governments need an AI strategy: A conversation with the WEF’s head of AI


In this episode of the McKinsey on AI podcast miniseries, McKinsey’s David DeLallo and Kay Firth-Butterfield, the head of AI and machine learning at the World Economic Forum’s (WEF) Center for the Fourth Industrial Revolution, discuss how individual governments are developing strategies to use AI to benefit their citizens.

Podcast transcript

David DeLallo: With its widespread implications for society, artificial intelligence (AI) is becoming an increasingly important item on the policy agendas of governments around the world. In fact, a number of governments have wisely gone so far as to draft national AI strategies. What are these strategies aiming to achieve? And how will they enable AI to benefit citizens as well as protect them from potential unintended consequences?

I’m David DeLallo with McKinsey Publishing. Welcome to this edition of our podcast series, in which you’ll get some insights from Kay Firth-Butterfield on how governments are beginning to think about AI. Kay is the head of artificial intelligence and machine learning at the World Economic Forum’s Center for the Fourth Industrial Revolution. What does one do in such a role, you may ask? I had the same question, and here’s how Kay explained her role.

Kay Firth-Butterfield: The work being done out of the four centers for the Fourth Industrial Revolution in San Francisco, Beijing, Mumbai, and Tokyo is around governance and policy for artificial intelligence. When I say governance, I don’t mean regulation. I mean looking for agile ways in which to help the technology benefit humanity and the planet, while also making sure that we mitigate the negative consequences we’re seeing, particularly from AI. I work in the AI space, but my colleagues work on blockchain, drones, precision medicine, and other emerging technologies.

David DeLallo: I wondered what this type of governance looks like. So Kay shared an example of a project she has worked on with the UK government to help it create guidelines for the procurement of AI technologies.

Kay Firth-Butterfield: As you may know, government procurement around the world is worth $9.5 trillion each year. So if you can plan to procure artificial intelligence products for your government, then you can begin to kick-start the AI economy in your country.

The work that we’re doing with the United Kingdom started when they sent a fellow to work with me in San Francisco. Since then, we have been co-creating ten high-level principles for the UK government’s procurement of artificial intelligence products. Those were agreed on, and now we are drilling down and creating a workbook so that the procurement officials actually know how to apply them.

What we’re creating is not regulation, which would take a long time to go through the parliamentary process. We’re creating iterative, agile governance around a technology that is in itself changing almost as frequently as we think about it.

David DeLallo: Initiatives like these help governments begin taking advantage of AI. But Kay went on to explain that it’s important for governments to make them part of a comprehensive AI strategy. To date, only 28 of the world’s 195 governments have drafted such strategies. Kay offered some advice to the others on how to get started.

Kay Firth-Butterfield: First of all, think about what the problem is that you actually need to solve. For example, in Denmark, because there aren’t many young people, they actually need to use AI to automate some of the jobs so that their population benefits from AI.

The same would be true in Japan. If you look at the work that Japan’s been doing, they’ve been really thinking about data policy and eldercare. How can they grow their robotics-cum-AI industry so that they can keep more people in their homes, so they can keep more people mobile longer, perhaps by autonomous vehicles—because they don’t have enough young people to actually care for the older people?

If you look at India’s national AI strategy, they wanted to concentrate their efforts in stimulating the AI economy in three verticals: healthcare, agriculture, and education. But they also needed to think about the fact that India is made up of many small and medium-size enterprises. How do they make sure that these businesses, too, can benefit from the AI economy? So one of the projects that the Indian government is doing with the World Economic Forum is creating a democratized database for AI so that more people can actually have access to the data they need in order to create applications in AI.

If you move to states in the developing world, you’ve got different issues. Across Africa, you’ve got a very large group of young people. So, when you’ve got a big labor market, where are you going to use AI to enhance the workforce? That’s a completely different issue. So it very much depends on what you need to use AI for.

David DeLallo: Kay noted that it’s important for governments to think not only about how to use AI to help their citizens but also about how to ensure it doesn’t harm them.

Kay Firth-Butterfield: The thing that probably keeps me up at night is that we aren’t moving quickly enough. AI products are developing really quickly, and governments don’t really have policies in place that truly protect citizens. We need to rush in that direction.

I’ll give you an example. One of the projects that we’re working on with UNICEF is around protecting our kids. You may have seen that there are a lot of AI-enabled toys out there that claim to educate children. Well, at the moment, we don’t know who has created the curriculum that is embedded in these toys. So we don’t know what they’re being educated about and how they’re being educated. We don’t know how much of their data is being collected and stored. Are we at a point where somebody can monetize our children’s data from the cradle until they’re 18? In that case, they won’t even have to apply for college, because somebody will just be able to buy all their data.

We haven’t thought through the fact that if, for example, a child is playing with a doll and the doll says, “I’m cold,” and the child says to the parent, “My doll needs a jacket,” is that advertising to the child, or is it not?

We are already doing a project with France around facial-recognition technology and its intersection with civil liberties. We know that facial-recognition technology is really important for catching criminals and terrorists and spotting human trafficking and things like that. But we also need to work through how the technology could infringe on our civil liberties.

David DeLallo: While issues like these are cause for concern and attention from governments, Kay believes the promise of AI to help people around the world makes it a worthwhile pursuit.

Kay Firth-Butterfield: The thing that excites me the most is that we may be able to help people who are suffering—something as basic as using drones to deliver blood to women who are dying in childbirth in Rwanda, something that my colleagues who work on drones at the Center for the Fourth Industrial Revolution were able to do.

And that’s without AI. Once you start adding AI, then we’re going to see much better solutions for people who are living in poverty or whose circumstances are difficult through no fault of their own.

David DeLallo: And on that positive note, we’ve come to the end of our episode. Thanks again to Kay Firth-Butterfield for sparing the time to share with me her perspectives on the intersection of AI and government. And thank you, listeners, for joining us today. If you enjoyed this episode, you’ll definitely want to check out more McKinsey podcasts here and on other McKinsey channels. Good-bye for now.
