An inside look at how businesses are—or are not—managing AI risk

In the past eight months, the growth of generative AI capabilities has been dazzling, elevating AI from the domain of tech teams to the top of CEO agendas. More than two-thirds of companies are expecting to increase their AI investment over the next three years.

But like any significant technological advancement, generative AI brings with it new risks, and it can also exacerbate existing risks. In our newest research, The state of AI in 2023: Generative AI’s breakout year, only 32 percent of respondents say they’re taking measures to prevent inaccuracy, while 38 percent, down from 51 percent last year, are managing cybersecurity risks.

To learn more, we caught up with Liz Grennan, an expert associate partner who leads our work in digital trust worldwide, and Bryce Hall, an associate partner and co-author of the report.

The explosive growth of generative AI tools was clearly the headline from this survey. What else are we seeing?

Liz Grennan, a McKinsey expert associate partner, and Bryce Hall, a McKinsey associate partner

Bryce: Businesses are recognizing that the newer generative AI tools have crossed the threshold from shiny new object to commercially viable technology that creates value across various business domains.

Part two of the story is that there’s broad recognition of, and heightened concern about, the risks. Many of these risks existed before with traditional types of AI: privacy, equity, anti-bias, and explainability. But now there is increased awareness of other risks that are particularly salient with generative AI tools: accuracy, IP protection and attribution, cybersecurity, and others.

One of the most colorful analogies I heard about generative AI was: “We've just opened Jurassic Park, but we haven’t yet installed the electric fences.”

How has the landscape of AI risk management changed?

Liz: To Bryce’s point, with the speed and advancements in technology, we don’t even know yet what type of electric fence to put into place.

One of the hard things is the rise, scale, and speed of malicious use. “There will be a lot of tears” is how one law firm partner characterized situations where companies have delayed risk management, whether due to complexity, procrastination, or a lack of perceived need.

What’s different now is that we are seeing a lot of case studies about reputational damage, customer attrition, and erosion of market value, along with increased fines and regulatory scrutiny.

One of the biggest changes over the past few years is the concept of personal liability for senior executives and board members for oversight failures. Leaders must maintain a good understanding of innovation being launched in their organizations, especially if its output impacts their customers.

How have companies been responding to these increasing challenges?

Bryce: We’ve seen some organizations take a bit of a “head in the sand” approach, for example, by banning the use of generative AI in their organizations.

Others have taken a wait-and-see attitude, watching as some explore the frontier, seeing how that plays out, and then adopting a fast-follower strategy.

Leading companies are doing a few things. First, in the development of an AI solution, we would typically have a pod that includes data scientists, data engineers, UX/UI designers, and business leads. Now legal and cyber risk experts are also being looped in at the start.

Second, in leading companies adopting generative AI, chief risk officers are playing an even more critical role in the C-suite.

The third thing is a clear, structured approach for developing and then rolling out new applications to beta users or “red teams,” to really test the guardrails and identify where strong parameters need to be put in place.

What best practices are we seeing from the leaders?

Liz: What distinguishes the leaders is that AI is not treated as a one-off spoke bolted onto the legacy business; it sits at the core of everything they do, so risk management and controls need to be embedded in every aspect as well. It must be foundational.

Organizations shouldn’t be building from scratch. The way I think of it is around “train tracks” that organizations have built to handle, for instance, privacy, security, and basic data governance. By train tracks, I mean the people, processes, and technology that span across the organization to manage risk. They will have to load their AI risk controls onto these tracks and upskill their people to manage it.

And because the technologies are changing so fast, the risk controls themselves will have to be continually updated. So vendors are bringing the best they have to market, with monitoring and feedback loops built in, knowing it’s going to be changing quickly. Companies are using agile methods, not just for quick triage, but for iterating their risk controls and responding as scope and features change. The risk controls have to keep up, as do the teams that apply them.

McKinsey has recently published 10 Responsible AI Principles. How else are we serving clients in this space?

Liz: Yes, as a leader, we want to be clear with our clients that we take AI and data stewardship very seriously. We are values-driven, and this includes how we approach data, so we are saying in effect: “We will steward your data well, managing it from a risk perspective that includes privacy, security, and confidentiality.” We also want to share these principles with the world at large, as guidance to help set standards for protecting society.

In turn, we are helping our clients develop their own approaches to responsible AI and data use. We design environments for the new agile risk management. And we counsel control functions to “shift left,” or move testing and performance evaluation earlier in the process, while remaining efficient. Every client is at a different stage of this journey, so we tailor our approaches to where they are, always mapping to a business’s strategic objectives—and to creating value.

Bryce: To build on Liz’s comment, our ability to help clients develop their own approaches to implementing responsible AI is a differentiator for us.

Two years ago, for example, everyone was saying explainability was essential. Now we realize that isn’t always possible at a detailed level with foundation models. What is possible is a focus on quality: the quality of the inputs and the accuracy and fairness of the outputs.


Responsible AI (RAI) Principles

We believe artificial intelligence has the power to transform business. To help our clients and our people harness that potential in an ethical, legal, and sustainable way, we’ve developed these guidelines.

What changes should organizations be anticipating in AI risk management over the next six months?

Liz: We will see steps toward formalizing global standards. The White House, for example, currently has voluntary AI governance measures for safety, security, and trust that it is asking organizations to subscribe to.

The EU AI Act may be in effect by early 2025, spurring a range of compliance programs that will then be required by law. There’s wisdom in orienting around what’s coming there, similar to the GDPR journey, which established privacy programs worldwide.

Over the next year, I think the societal and organizational value of AI will increase. In fact, in related research in the US, we found that consumers now consider the trustworthiness of a product or service almost as important as price and delivery time. This was astounding to me, and it has significant implications.

The stage will be very crowded with both virtue and vice. One of next year’s tech trends could be the use of AI to combat the harms of AI because we can’t solve this with traditional means. It may take the power, size, and scale of AI to counter these adverse effects.
