Author Talks: In defense of big data

In this edition of Author Talks, McKinsey Global Publishing’s Raju Narisetti chats with Orly Lobel, a tech policy scholar and distinguished law professor at the University of San Diego, about her new book, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future (PublicAffairs, October 2022). Instead of trying to curb technological development—which Lobel says is not stopping any time soon—we can steer it toward a more equitable future. An edited version of the conversation follows.

What is the meaning of the book’s title?

I wrote The Equality Machine, and called it that, to shift our mindset away from all of the bad, scary things that we are hearing about technology.

There is some truth—and sometimes a lot of truth—to the idea that algorithms, artificial intelligence, and digital technology can do harm, but I found that it’s important to have a more nuanced conversation, where we can start thinking about what it would look like if we designed technology to be an equality machine, to be AI for good, to do the work that we need to do to have a fairer and more just society.

Why do you say tech has gone from being seen as generally good to generally evil?

We’ve had a shift over the past two decades. We’ve gone from a lot of excitement around new technology, and an understanding that it has both potential and problems, to flat conversations about automating bias, automating inequality and surveillance, and having less and less privacy.

A lot of this has to do with alarmist misinformation about the direction that technology has been taking. Some of it has been about who has skin in the game in designing our technology systems. Justifiably, there is concern about who has the power to decide on and design our digital systems. That power is far too concentrated in big tech.

I have been involved in antitrust policy with the current administration and internationally. A lot of the fear of tech has to do with our own behavioral failures. We as humans have an aversion to things that are less understood, less known. We have a psychological aversion to change. All of this has come together to produce a misinformed, flat, alarmist discussion about technology.

Given some of the risks of technological progress, how do we move forward?

To move forward, we need to do both: we need to be critical, but we also need to be cautiously constructive and optimistic. If we want to see progress, it’s not enough to just say, “Let’s ban algorithmic decision making. Let’s ban biometric data collection.”

What we need to do is ask questions and be sincere about what our goals are and what the trade-offs are between different choices that we are making as a society. This is the history of all human progress: there are always costs and benefits. There are risks, and we need to be asking what the failures have been, what the harms have been, and what the risks are, while also discovering best practices and discovering the potential.

I wrote The Equality Machine because, in my research, I find so many things to celebrate, so many great developments, and so many heroes, like the computer scientists who are revolutionizing healthcare, health screening, and medical devices. There are also people making sure that we can detect salary gaps in the workforce and in every other sector of life.

Part of my motivation for writing The Equality Machine was to show that in every field of life and in every sector that we care about, there are promising developments. So we need to know what the worst practices are, but we also need a vision and a blueprint of the best practices.

Why isn’t there a consensus on how to police technology?

The policy solutions on the table right now are quite limited. First of all, we are privileging privacy in ways that can sometimes harm the very communities we are trying to advance and protect, communities that could otherwise have more resources allocated to them. There is definitely a digital divide in the world.

What I show in the book is that we should sometimes be just as worried about not collecting data as we are about collecting too much data and intruding too much on people’s personal information.

We should absolutely be worried when data is not collected and when we leave some communities in different areas of the world—especially in the developing world—without access to digital data collection. We shouldn’t have these [digital] divides between countries. We should understand that there are winners and losers from overprotection of digital data collection, just as much as there are winners and losers from underprotection.

The conversations that we’re having are divisive, and as a consequence, we aren’t looking at constructive solutions that are robust, that involve collective action, and that involve investment in technology and experimentation.

The debate right now is binary: either “private industry will take care of everything” or “private industry is the source of all evil”; either “we need to focus on breaking up big tech and creating bans on technologies like facial recognition” or “we need to erect more walls around digital data collection.”

Those are conversations that we need to have, but we aren’t having the conversations about our core democratic values, which have always had internal tensions among them. We value privacy. We value equality, health, and safety. We value speech, access, education, growth, and innovation.

We need to acknowledge that, in each system and in each new advancement in technology, there are going to be some trade-offs and some difficult decisions to make, and those are the decisions that we have to make as a society.

Those are the kinds of conversations that we have to have: When do we value accuracy at some expense to privacy, for example? We’re not having these conversations because we think that everything is right and left, that it’s ideological.

Why are you optimistic about a future full of equality machines?

Sometimes it’s hard to see the arc of progress when we’re living through history and through a lot of things that are concerning at the moment. There are a lot of wrongs in the world.

Here in the United States, women have fewer reproductive rights than they had even a decade ago. Certainly, that’s not a reason for optimism. And yet I use the terms “the arc of history” and “the arc of progress” because it’s important to look at the huge leaps that have been made through technological advancements: toward equality for women and minorities, and toward inclusion and accessibility for people with disabilities.

I set out to research all these wonderful things that are happening in medicine, in biotechnology, in technology, and in the platforms that help tackle hiring gaps, salary gaps, and even dating markets.

I’m cautiously optimistic. I’m always asking about the risks. It’s only when we have skin in the game and a constructive stance on envisioning the best possible future that we can get a brighter future and an equality machine.

Author Talks

Visit Author Talks to see the full series.
