Three experts offer an inside look at the state of AI

The results from McKinsey’s most recent survey on the state of AI are in. Conducted during a year of pandemic, it covered some 1,800 respondents from across a range of industries around the globe.

According to our findings, the adoption of AI continues to build; a full embrace of best practices is critical to high performance; and risk management remains complicated and challenging. This is the fourth year we’ve run the survey, and the first time MLOps (short for machine-learning operations, the set of best practices for the commercial use of AI) and cloud technologies emerged as critical differentiators.

Below, three of our experts share an inside look at the research: Michael Chui talks about the latest AI trends, Liz Grennan walks us through the complex world of AI and risk, and Kia Javanmardian explores MLOps, one of the industry's hottest topics.

Michael Chui, partner

What stood out for you in the findings?

The companies that are deriving the most benefit from AI are professionalizing or industrializing their capabilities. These high performers can attribute the greatest percentage of their profits to their use of AI. They weren’t necessarily spending more, but their project costs tended to stay within budget. Indeed, other companies were far more likely to have AI cost overruns.

The findings revealed that AI high performers follow many of the best practices. Is it an all-or-nothing scenario?

No, but the benefits are multiplicative; the best practices interlock and reinforce one another.

High performers have adopted MLOps, a set of practices and component tools (analogous to DevOps in software development and deployment) that has emerged over the past few years. Put together, these practices allow you to train, deploy, and test models many times faster than when AI is approached as a craft. When you automate and industrialize these processes, you can repeatedly and predictably achieve significant returns on your AI investments.
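To make that idea concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of train-evaluate-promote step that MLOps tooling automates and runs repeatedly. The accuracy threshold and the local "registry" directory are illustrative assumptions, stand-ins for a real model registry and deployment pipeline, not anything prescribed by the survey.

```python
# A minimal sketch of an automated train-evaluate-promote step, assuming
# scikit-learn; the accuracy gate and local "registry" folder are stand-ins
# for a real model registry and deployment pipeline.
from pathlib import Path

import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_THRESHOLD = 0.90       # assumed promotion gate
REGISTRY = Path("registry")     # assumed stand-in for a model registry


def run_pipeline() -> float:
    """Train, evaluate, and conditionally promote a model in one repeatable step."""
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = GradientBoostingClassifier(random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Automated gate: only models that beat the agreed threshold are promoted.
    if accuracy >= ACCURACY_THRESHOLD:
        REGISTRY.mkdir(exist_ok=True)
        joblib.dump(model, REGISTRY / "model_candidate.joblib")
        print(f"Promoted model, accuracy {accuracy:.3f}")
    else:
        print(f"Rejected model, accuracy {accuracy:.3f}")
    return accuracy


if __name__ == "__main__":
    run_pipeline()
```

Because the same script runs unchanged every time, retraining and redeployment become a scheduled, repeatable operation rather than a hand-crafted project.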

According to the research, cloud is a key enabler for MLOps. Why is that so?

First, cloud environments come with off-the-shelf tools, libraries, and frameworks that can speed up the AI model-development life cycle. Cloud also provides the flexibility to ramp compute up and down as needed, which is especially useful when models have to be retrained.

Together, these survey findings indicate that the combination of MLOps, cloud, and other best practices provides a strong foundation for capturing AI value at scale.

The state of AI in 2021

The results of this year’s McKinsey Global Survey on AI indicate that AI adoption continues to grow and that the benefits remain significant.

Liz Grennan, expert associate partner

How did risk and AI factor into the latest findings?

The highest performers are also those who are addressing risk management in AI. One worrisome finding is that cybersecurity continues to slip down the list of concerns. There’s no set cyber standard across organizations, which underscores, for me, the need for every organization to come up with its own framework, and that isn’t easy. We see three risk categories: cyber, data, and AI. They’re all completely interdependent and require an integrated risk model.

What are some of the consequences of poor AI risk management?

One of the worst things is that it can perpetuate systematic discrimination and unfairness. It can mean women not getting hired because the training data is biased, or people of color being denied employment, loan consideration, housing, and other benefits because the underlying data is biased. In one pandemic-era example, students who were unable to sit for exams were excluded from university admission simply because they came from historically low-performing high schools, despite their own excellent personal records: the algorithm that generated their proxy test scores was inherently biased.

Without AI risk management, unfairness can become endemic in organizations and can be further shrouded by the complexity.
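To show how this kind of bias can be made visible rather than shrouded, here is a minimal sketch of a disparate-impact check on model decisions. The group labels, the hiring example, and the 0.8 threshold are illustrative assumptions, not a complete fairness framework.

```python
# A minimal sketch of surfacing the kind of bias described above: compare a
# model's selection rates across groups (a "four-fifths rule" style check).
# Group labels, data, and the 0.8 threshold are illustrative assumptions.
import pandas as pd

DISPARATE_IMPACT_THRESHOLD = 0.8  # assumed minimum acceptable ratio


def selection_rate_report(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Selection rate per group and its ratio to the most-favored group."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    report = rates.to_frame("selection_rate")
    report["ratio_to_highest"] = report["selection_rate"] / report["selection_rate"].max()
    report["flag"] = report["ratio_to_highest"] < DISPARATE_IMPACT_THRESHOLD
    return report


if __name__ == "__main__":
    # Hypothetical hiring-model decisions (1 = advanced to interview).
    data = pd.DataFrame({
        "group": ["A"] * 100 + ["B"] * 100,
        "advanced": [1] * 60 + [0] * 40 + [1] * 30 + [0] * 70,
    })
    print(selection_rate_report(data, group_col="group", outcome_col="advanced"))
```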

How does a company start with an AI risk management program?

The easiest, and maybe the best, place to start is to establish the set of ethical values you want for your business, then work out how to operationalize those values into a framework and determine the lens through which you will start evaluating risk.

Thematically, fairness and privacy are two highly important values, in addition to security, explainability and transparency, model performance, and safety. And it’s important to understand the regulations that are in place for the applicable industry and geography.

How does AI risk relate to cyber risk?

A complex AI system is the perfect target, and the more scaled up the AI, the bigger the threat. A bad actor can inflict damage by compromising the model and inserting faulty or corrupted data, affecting large numbers of people in very personal, profound ways.

What makes you optimistic?

I work with an organization that aspires to be a global leader on human rights issues because they are ‘conscience first.’ They’ll state a values-driven aspiration and then they’ll weigh it for feasibility, costs, and other relevant business drivers. They want their values to be a market differentiator—it’s that sort of position that makes me optimistic.

Kia Javanmardian, senior partner

Why is MLOps becoming critical to AI implementations?

We have been using the car-factory analogy, and it holds up pretty well: MLOps is the factory you build to scale your analytics. There are a few big-picture concepts.

The first step is to shift some of what you spend on R&D and pilots to building the infrastructure that will allow you to mass-produce and scale your AI projects. The second is to monitor the data your models are using—to stick with the car analogy, a gas gauge or dashboard—so that you can track the quality of the data going into and out of your models and their level of performance (a simple sketch of this kind of check follows below).

Third, if you are building every car from scratch, down to the door handle, it’s going to take you an awful lot of time and energy to build each one.
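As a concrete illustration of that gas gauge, here is a minimal Python sketch of an automated data-health check that compares incoming data against a reference snapshot. The tolerances and the example columns are illustrative assumptions.

```python
# A minimal sketch of a "gas gauge": compare incoming data against a reference
# snapshot and flag missing values and distribution drift before the data
# reaches the model. Thresholds and column names are illustrative assumptions.
import numpy as np
import pandas as pd

MAX_NULL_RATE = 0.05   # assumed tolerance for missing values
MAX_MEAN_SHIFT = 3.0   # assumed drift tolerance, in reference standard deviations


def data_health_report(reference: pd.DataFrame, incoming: pd.DataFrame) -> pd.DataFrame:
    """Return one row per shared numeric column with basic quality signals."""
    rows = []
    for col in reference.columns.intersection(incoming.columns):
        if not np.issubdtype(reference[col].dtype, np.number):
            continue
        null_rate = incoming[col].isna().mean()
        ref_std = reference[col].std() or 1.0
        mean_shift = abs(incoming[col].mean() - reference[col].mean()) / ref_std
        rows.append({
            "column": col,
            "null_rate": round(null_rate, 3),
            "mean_shift_in_std": round(mean_shift, 2),
            "alert": null_rate > MAX_NULL_RATE or mean_shift > MAX_MEAN_SHIFT,
        })
    return pd.DataFrame(rows)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = pd.DataFrame({"age": rng.normal(40, 10, 1000),
                              "income": rng.normal(55_000, 12_000, 1000)})
    incoming = pd.DataFrame({"age": rng.normal(47, 10, 500),   # drifted upward
                             "income": rng.normal(55_000, 12_000, 500)})
    print(data_health_report(reference, incoming))
```

In practice, a check like this would typically run on every batch of data before it reaches a production model, with alerts feeding the dashboard described above.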

So, where does MLOps fit in?

It’s based on the concept of building a library of standard parts or code. Your data scientists will go from creating models to spending a good chunk of their time assetizing them, converting them into reusable Lego-like parts (one such part is sketched below).

Finally, you don’t have the people who designed the car, your best engineers, assembling and maintaining it. Instead, they focus on what it takes to get the engine’s horsepower from 400 to 800.
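Here is a minimal sketch of what one of those Lego-like parts might look like in Python with scikit-learn: a packaged, parameterized pipeline that project teams reuse instead of rebuilding preprocessing and modeling code from scratch. The factory function, its defaults, and the column names are illustrative assumptions, not a specific library.

```python
# A minimal sketch of "assetizing" a model: a reusable, parameterized building
# block that teams assemble like a standard part. The factory function and its
# defaults are illustrative assumptions.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler


def make_tabular_classifier(numeric_cols, categorical_cols, C: float = 1.0) -> Pipeline:
    """Reusable 'standard part': imputation + encoding + a linear classifier."""
    preprocess = ColumnTransformer([
        ("numeric", Pipeline([
            ("impute", SimpleImputer(strategy="median")),
            ("scale", StandardScaler()),
        ]), numeric_cols),
        ("categorical", Pipeline([
            ("impute", SimpleImputer(strategy="most_frequent")),
            ("encode", OneHotEncoder(handle_unknown="ignore")),
        ]), categorical_cols),
    ])
    return Pipeline([
        ("preprocess", preprocess),
        ("model", LogisticRegression(C=C, max_iter=1000)),
    ])


# Usage: each new project snaps the part into place rather than rebuilding it.
pipeline = make_tabular_classifier(
    numeric_cols=["age", "income"],
    categorical_cols=["region"],
)
# pipeline.fit(training_frame, labels)  # fit on project-specific data
```

A team that maintains a shared library of such parts lets its best engineers spend their time improving the parts themselves rather than reassembling them for every project.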

How widespread is MLOps?

Digital natives like Google and Amazon have been practicing MLOps for years to build their products. Very few non-digital natives are using it at scale.

What do people think when they hear about it?

Managers get the problem, but they can be overwhelmed by the solution. It’s very technical, involving data infrastructure, governance, risk practices, and systems. It’s also about organizational change, talent mix, and evolving roles.

It can be overwhelming. But the companies that are practicing MLOps are getting orders-of-magnitude higher returns for the same relative AI investment.
