Author Talks: How AI could redefine progress and potential

In this edition of Author Talks, McKinsey Global Publishing’s Yuval Atsmon chats with Zack Kass, former head of go-to-market at OpenAI, about his new book, The Next Renaissance: AI and the Expansion of Human Potential (Wiley, January 2026). Examining the parallels between the advent of AI and earlier renaissances, Kass offers a reframing of the AI debate, suggesting that the future of work is less about job loss and more about learning and adaptation. An edited version of the conversation follows.

What inspired you to use the Renaissance as the book’s metaphor?

We use “renaissance” now as a convenient catchall for sweeping progress across art, science, math, and politics in a period that also saw tremendous growth in population and other areas. I chose the Renaissance [as a metaphor] because it’s a period that most people recognize as foundational in human history.

That’s also the reason we chose “renaissance” as part of the title. But there’s another important comparison: in addition to the growth across so many disciplines, the late Middle Ages were, by any measure, pretty bad. Most people during the late Middle Ages would have had good reason to believe the world was getting much worse, not better. The population declined because of disease, and there was a regression in freedom and safety.

What factors, real or perceived, are driving the adoption gap?

For reference, the adoption gap is the space between the technological threshold, or what technology can do, and the societal threshold: what we want it to do, or what we let it do.

I’ve become fascinated by societal thresholds for many reasons. Generally, I am more interested in the human element, but academically, I also find it fascinating that in ten years, the question “What can a machine do?” will become far less interesting.

There are reasons for the adoption gap. Even in the case of robotics, where machines can wash dishes and will soon be in every home, I’m not sure it’s “right.” There is an incredible disparity between what a machine is capable of and what we choose to have it do.

Another reason for the adoption gap is that we have exceptional tolerance for human failure, and we have none for machine failure. That actually plays out very well in many cases.

It’s the reason [we assume] my building will never fall over and planes don’t fall out of the sky. We hold a high bar for what we expect machines to do. This becomes interesting to explore as we start to consider why people will use certain technology.

The path to the future is going to be driven not by scientific progress but by cultural and societal progress.

How do you think this renaissance will evolve with the advent of AI, and on what timeline?

The adoption gap [during the Renaissance] was a result of a policy issue, not a technology issue. In our case, the diffusion and many of the adoption gaps will take far less time. But we should have the humility to recognize that what we consider a long time now—ten years, perhaps—might have been considered no time at all 500, 600, or 700 years ago. This maps to the general theme of the book: the idea of super-linear, parabolic progress and parabolic change, some of it for the better and some for the worse. Yet the idea that we’re on an incredible new slope gives people a frame of reference as they start to consider how much things will differ.

I hope to inspire two feelings in my readers: humility and hope.

How will this renaissance affect the future of work?

When I talk about job loss, very few people are actually willing to discuss it academically. [They approach it from the perspective of] will we have work for people to do, and if we don’t, what happens? That totally misses the point. I see it as my job to reframe the future-of-work argument.

I think there will be more work. Humans will find things to do, and we should have the humility to remember this. We lack the humility to consider all the things that could exist that we can’t yet fathom. Humans are actually pretty bad at imagining an unknown universe. That’s why we anthropomorphize aliens as having two arms and two legs. We don’t know what creatures could look like otherwise.

The idea that work could get better is somehow upsetting to people. I do have to reframe it, and I tell people that I don’t think this is an economic problem.

When I ask people, “What are the major risks of the current automation boundary?” they say, “We’re going to lose jobs.”

We forget that we are descendants of people whose jobs were automated to our collective economic benefit. We don’t think about them at all. We wander the earth with more abundance than they could have imagined, very grateful that we no longer lay bricks, mill cotton, or process most of our food by hand.

People bemoan the loss of these trades but also love that everyone has so much more [now]. So I see it as my job to remind people that basically everyone wants everyone else’s jobs to be automated, not their own.

We’re all very eager to see the world get better, faster, and cheaper without realizing what that means. It means extricating people from the manufacturing of the goods and services that we want to be better, faster, and cheaper.

One of the most important things I can do is to help people see that job automation is not [just] an economic issue; it’s an emotional one. We are facing an identity displacement crisis, not a job displacement crisis.

One of three things will happen:

  1. We automate a ton of work and have an incredible economic boom. Then we have to figure out how to distribute the gains.
  2. We automate very little and stagnate economically.
  3. We automate some jobs and protect a bunch of others. This [scenario] would be tricky.

This [third scenario] presents the distribution problem at its most glaring. We will have a very asymmetrical automation path, where the adoption gap will appear odd in some sectors. That imbalance will present some economic strangeness. Some people won’t have the work that they used to have, yet somehow everyone will still have to stand in line to mail something because we didn’t automate the postal service.

This idea encapsulates the argument I make in the book about the distribution problem. The next 20 years will be the trickiest, because some people will look at others and say, “Why isn’t your job automated?” The response will be, “Because a politician said it can’t be.” That market factor will be very complicated, perhaps for the next 50 or 100 years.

What are the implications of gen AI for younger workers, in particular new college graduates?

A lot of the job fears are overblown. On a macro basis, they’re terribly overblown, because if we automate most work, something good will happen. I remind people of this all the time: every time we put a small farmer out of business, it means that far more people can be fed by industrial farming.

One could bemoan the loss of something yet acknowledge the economic gain that lifts people out of poverty. If one wants to go back to small farming, one can. It just means that it will come at the cost of hundreds of millions of lives: if you return to that hyperlocal farming, a lot of the world will plunge back into food scarcity. There is a potential panacea of organic living and local living, but we aren’t there yet technologically. That probably requires fusion, desalination, small modular reactors, and more.

I would argue that the problem we are presented with is that the uncertainty in the market alone is casting an enormous shadow. I challenge people to see it as an opportunity.

We don’t know what the next ten years will offer. They could be an opportunity to say, “Well, the cost of living in the United States is spiraling out of control, and this is a chance to completely reset it. We could do it.” The alternative would be to say, “Well, we don’t know what it will look like, and that means it will be bad.” Given my discussions with management consultants, what you will see, in knowledge work in particular, is the apprenticeship problem.

It means that even if we hire more young people for certain roles, the nature of those jobs is going to change. We will automate a lot of the work that others had to do. That’s the “grunt work” that most people did not want to do. Young lawyers, management consultants, and financial analysts spent 100-hour weeks doing the tasks for which the partners were billing. Now we realize that if we automate that work, apprenticeships will begin to change.

The change presents a strange problem. The reason that hotels are so well run is that most people in hospitality management have done every job in the hotel. When you start to strip away the work that needs to be done, such that the on-ramp to the job is much smoother, it’s not that the work gets worse but that the understanding of the function starts to crumble. That will be a problem. There is value in the journey to taking charge of a function. Most management consultants who work their way up at a firm remember what it was like as an associate who did everything.

The nature of that journey changes. In 20 years, we will see companies run by people who don’t necessarily know all the work that was once done. As a result, they may not appreciate the nature of the work as much. Things will be much more efficient and much cheaper, yet that doesn’t mean we will be as connected to the actual nature of the work. That will spell a huge change in knowledge work, and we will see some specialization and probably some fragmentation, because groups will begin to self-select into very specific functions.

It’s up to us to minimize costs and risks and maximize benefits. What do you think will be the hardest to manage?

We should spend significant time talking about risks that are not talked about enough.

One is the alignment problem, which is poorly defined and needs redefining. That’s the question of “Does the machine care about our values, and does it concern itself with the consequences of its actions?” This becomes particularly problematic, but not because alignment is hard to do. We’ve actually proven that we’re pretty good at alignment. It’s problematic because there is no North Star.

When people talk about “p(doom),” not everyone agrees on what alignment looks like. Alignment to one is misalignment to another. This is true on a religious, cultural, or geopolitical basis. That will lead to differently aligned AIs arguably competing with each other. That presents an obvious risk.

The other weird outcomes relate to explainability: models that are exceptionally good but cannot or do not explain themselves well. This perpetuates the world we live in, in a way that feels very inexplicable to me. We don’t really know why we believe what we believe, and we certainly don’t know why someone else believes what they believe. AI should offer more explainability, as long as we require it of the model.

One of the dystopian outcomes I often talk about is a panacea where everything works, but we don’t know why. The problem with that world is that we could wake up at any moment and realize that it’s working for the wrong reasons.

[Dystopian film thriller] Soylent Green is one example. I won’t spoil the surprise, but in the film, they eventually discover what Soylent Green [a food substitute] is actually made of. We need to constantly reinforce the importance of models that explain themselves so that we build robust systems where the box is not black but, in fact, very transparent.

The risk that I worry about most is bad acting. I refer to bad acting not in the sense of doom but in the sense of catastrophic events that could really set progress back. I’m not referring to high-resource bad acting, such as nation-states and James Bond villains, but to low- and medium-resource bad acting. I’d like to emphasize that the ability of an individual to do a lot more is great for 91 percent of the population. That means that 91 percent of the population is either Agent Zero or Agent Hero: they do nothing or are well-intentioned.

Roughly 9 percent of the population is Agent Nero. They are ill-intentioned or antisocial, either because they’re psychopathic or because they’ve figured out how to make crime pay. We need to concern ourselves a lot more with that population. We do not engage enough in the international discourse about the ability of individuals to steal, cause harm, and produce bioweapons.

This issue needs to be front and center in the debate. Making it a p(doom) issue makes it too abstract. Viewing it as a case of “Can an individual steal hundreds of millions of dollars now?” makes it so much more grounded and honest. That enables people to say, “Yes, this is a problem that we need to address.”

What advice would you give the younger generation growing up in a different technological age?

Young people have an incredible opportunity to build a much better world than we can imagine today.

One of my challenges to everyone—young and old—is to seize the opportunity to learn how to live. The cultural and spiritual decay that comes from the vapid brain rot people are experiencing right now is an imminent threat. We must avoid that trap in a world where people can outsource all of their critical thinking and subsist quite comfortably because we have automated so much. I argue that the way to avoid the trap is to learn how to learn. One way is to reinforce the ability of any individual to explore the limits of their own personal, emotional, mental, and physical capabilities at a very young age.

The process by which you accumulate skills is as important as the skills you’re accumulating, and in many cases more important. Knowing how to play a piano is interesting insofar as you can play the piano, and that may be very rewarding.

Yet that process also teaches you how to read music, which teaches you how to read other languages, which teaches you the thrill of actually mastering—or of trying to master—something. For many people, that is now a possibility from a time standpoint, but it’s not actually a possibility from an interest or an inclination standpoint.

Author Talks

Visit Author Talks to see the full series.
