Mozilla’s Mark Surman: Let’s make AI as trustworthy as seatbelts

In a tech landscape of continual change, Mozilla’s approach has been unwaveringly constant: magnify the public benefits of the internet through fair and open access. That credo, outlined in the organization’s Manifesto back in 2007, remains in place today.

Long before he ever joined Mozilla, Mark Surman espoused those same beliefs. As president and executive director of the Mozilla Foundation, Surman is a leading advocate for trustworthy AI, digital privacy, and the open internet, and he has built the foundation into a major philanthropic and advocacy voice.

In his interview with McKinsey’s David DeLallo, Surman outlines what it will take to achieve trustworthy AI and the value that will come to those who lead in this area. An edited version of their conversation follows.

David DeLallo: How are Mozilla Firefox and the Mozilla Foundation related?

Mark Surman: The Mozilla project began about 25 years ago as a bunch of activist, radical hackers who believed in open-source principles and wanted to create an alternative browser to the then-dominant Internet Explorer. Right from the start, our idea was to establish an open internet. After about five years, we realized we needed to enable that more broadly, so we created the foundation to serve as a public asset. We also wanted to set up a company that could play in the market and influence what consumers use, so we established a subsidiary, the Mozilla Corporation, to produce Firefox. This is who we are today. We’re Mozilla, and we have an activist arm and a product arm, and together we think they can help move the internet and AI in a better direction.

David DeLallo: As a longtime advocate of responsible technology use, how would you define trustworthy AI?

Mark Surman: Mozilla was founded on the belief that privacy, security, transparency, and human dignity matter. And although technology has changed, those values have not. Just as we pushed for a trustworthy web in our early days, what matters today is that we have trustworthy AI and data-driven computing that work in the interests of people and that are shared and open. When we first started working on a browser, what mattered was keeping the web open and making sure it worked across every computer. And we think about those same values today.

Trustworthy AI involves two things. The first is AI that promotes human agency: it lets users control what AI-powered products do, see how they work, and make informed choices about whether and how to use them. The second is accountability. For people who misuse AI or deploy it in sloppy ways, there must be repercussions. Ultimately, that’s what trustworthy AI boils down to for us.

David DeLallo: How do businesses reconcile the potential conflict between providing transparency around AI and ensuring that they safeguard competitive advantage?

Mark Surman: It’s a misconception, though one deeply baked into our thinking about computing, the internet, and digital society today, to believe that trust and profit, or customer value and transparency, are at odds. Look at the auto industry. Volvo made its name on creating the safest car.

I believe there is a real opportunity to create AI that consumers believe in—because customers are the people you’re making the product for, and they want to trust you. At Mozilla, we want our product to say, “This is technology you can trust. Not only will it not harm you, but it will protect you, and you can control it.” We just need to switch the mental model to that.

David DeLallo: Recommender engines have come under scrutiny for spreading misinformation. How can companies give users the value these AI tools can provide while minimizing the negative impacts?

Mark Surman: AI is in almost everything now, and it does a lot of things that most of us find delightful. For example, I love it when YouTube or Spotify recommends something that I would never have thought of or when my phone guesses what I want to do and provides a prompt. But in the rush to produce user value and get these products to market, some companies haven’t given sufficient attention to the side effects these tools can create. It’s one reason we saw the spread of health misinformation in the pandemic and a host of other problems.

I know that companies genuinely want to fix this problem, but they’re not doing it effectively enough. We’ve done a bunch of research on YouTube showing that even when user controls are in place, they aren’t effective at filtering out content that is problematic or that users don’t want.

There are two parts to solving these issues. The first is a product design consideration around safety and trust. As much as there are good intentions, there’s not enough investment or innovation. It’s analogous to when the auto industry was beginning to consider safety features for automobiles. The mindset then was, “Well, the seatbelts work 20 percent of the time. We’ll figure it out eventually.” And the response from customers and others was, “No. Figure out how to make the seatbelts work now.” It’s the same with tech. Companies know how to innovate in the ways needed, but they have to invest in those areas and work harder at it.

The second is regulation, which acts as a counterpoint and creates an incentive. If the incentive is just to make more ad revenue or get more views, for example, then companies might go slow on safety. Regulation should be a check and balance, pushing for further and faster investment in products that are trustworthy and keep us safe.

David DeLallo: Are there lessons from the conversations that once swirled around open access that we can apply to today’s discussion of AI ethics?

Mark Surman: We’re in similar territory, but the questions around access and openness are not quite the same as those 25 years ago. Back then, I was one of those people who thought the march of capitalism would be enough to get everybody online. And that largely did happen. About 50 to 60 percent of the world’s population is online today. That’s still not enough, but it does mark an enormous shift in the space of two decades. However, the question we now need to be asking is, “On what terms are people online?”

Are poor people more surveilled, exploited, and extracted from than rich people who can afford not only fancier devices but also more privacy and less risk of harm? I think we need to be raising a different set of questions about who benefits and who is hurt by the way our digital world is shaped.

David DeLallo: What advice do you give to businesses for ensuring trustworthy AI?

Mark Surman: I’m Canadian, so to use a Canadian metaphor, you have to look at where the puck is headed. Regulation will create an incentive and open new markets for more ethical AI. There will be a market, I believe, for start-ups focused on responsible approaches and on back-end technology that helps other companies build AI more responsibly. Big players such as banks and social platforms can likewise discover that responsible AI pays off. Investing more in “seatbelts” and in building trustworthy AI will help their bottom lines. But they also just have to do it now. This is part of their job.