Getting to scale with artificial intelligence

In this episode of the McKinsey Podcast, Simon London speaks with McKinsey senior partner Tamim Saleh and partner Tim Fountaine to explore how far most companies are along the road to adopting artificial intelligence at scale, and how the companies furthest ahead got there.

Podcast transcript

Simon London: Hello, and welcome to this episode of the McKinsey Podcast, with me, Simon London. Today we are going to be getting practical with artificial intelligence. By now, it’s common knowledge that AI holds immense promise across a wide range of applications—everything from diagnosing disease to personalizing websites. But how far are most companies along the road to adoption at scale? When you look at the organizations furthest ahead, how did they get there and what are they doing differently?

To answer these questions, I spoke with a couple of McKinsey partners who are working with clients on exactly these issues. Tim Fountaine is a partner based in Sydney, Australia, and Tamim Saleh is a senior partner based in London. Tamim and Tim, welcome to the podcast. Thank you very much for being here.

Tamim Saleh: Thank you.

Tim Fountaine: It’s a pleasure to be with you.

Simon London: We’re going to be talking not just about the exciting promise of AI, which to be clear is very real, but how in practice—on the ground in real organizations—the promise can be realized. Tim, maybe you take first crack at this. What do we know about how far along most companies are in the journey?

Tim Fountaine: Well, I think you’re right. There’s a lot of excitement about the potential of AI, and there are some wonderful examples of AI making real progress, helping to diagnose diseases in healthcare, improve customer experiences, and so forth. But most companies that we’ve talked to in the last few years are not making progress at the pace you might assume from all the newspaper articles. In fact, we did a recent survey of 1,000 companies, and we found that only 8 percent of the firms we surveyed engaged in practices that allowed widespread adoption of AI.

The vast majority of companies are still at the stage of running pilots and experimenting. We still believe that AI will add something like $13 trillion to the global economy over the next decade, but putting AI to work at scale remains a work in progress for most companies.

Simon London: The companies that are doing this well—the 8 percent you mentioned that are putting the practices in place to get to scale with AI—what are they doing differently?

Tim Fountaine: The first thing is they tend to be ahead [in] digitization, generally. There are particular industries where that’s happening more. For example, financial services, telecoms, media, high tech—they’re really leading the way, as you can imagine. They don’t have physical products to the same extent as other industries. They’re really about data and digital information, so, of course, AI is highly applicable in these industries. But no matter which industry companies are in, the ones that are doing the best are paying real attention not only to the technology but also thinking about how it changes their organizations and what kind of culture they need to build in order to be able to take advantage of these new technologies.

The ones we see doing well are doing three things right. The first is organizational: they’re moving from siloed functional work to cross-functional teams, where people from the business, analytics, IT, and operations all work side by side to achieve particular outcomes. The second is changing how they make decisions: much less top-down and judgment based, and much more about empowering frontline teams to make decisions using not only their judgment but also algorithms.

Finally, there’s something about mind-set, something about moving from being risk averse and only acting when you have the perfect answer to being much more agile, willing to experiment, being adaptable, being willing to fail fast, but learn fast and get things out quickly.

Simon London: Yes. I mean, on the one hand, that makes a lot of sense. On the other, what you’re describing there, Tim, sounds like wholesale change. It’s a lot of change on a lot of different organizational dimensions. Tamim, let me bring you in here. In practical terms, in your work with clients, where do you even begin?

Tamim Saleh: One of our clients, for example—a leading European steel manufacturer—wanted to industrialize AI. It wasn’t just about doing a number of pilots or MVPs [minimum viable products] or tests. The CEO, I remember from the very first discussion we had with him, looked at it as a people problem. He didn’t want a technology story or “here are the use cases.” He asked: “How will my people deliver AI? What kinds of skills do they need to have? How do I fit this into our culture?”

Some of the things they looked at, for example, were to understand what proportion of their organization needs to be [technologically] literate. They quickly came to the conclusion that they needed translators: people in the business, whether in operations, sales, or quality management, who understand how analytics are applied. These translators then used their knowledge to work with the data scientists and data engineers to produce the initiatives and use cases, industrialize and deploy them, and make sure they were continuously developed. They budgeted, for example, as much for the adoption, training, and development of people as for the technology itself, if not more.

They spent a lot of time on training. They built an analytics academy that trained 400 of their 9,000 workers in the first year. That led them, within 18 months, to produce 40 initiatives, with a 15 percent EBITDA [earnings before interest, taxes, depreciation, and amortization] improvement. If anything, they are continuing to accelerate the application of analytics; in fact, the objective is for analytics to penetrate everything they do, so that it becomes business as usual. The key lesson out of all of this is that when a company wants to apply analytics, it should look at the problem not just from the technology end or data quality but from the people side and the mind-set.

Tim Fountaine: One of the things we often see companies getting wrong is that they’re building analytical models—AI models—but failing to think through how those models change the business. The companies that are getting it right have realized that AI is just another tool for solving business problems or achieving business outcomes. As such, AI is a way of changing a workflow, changing the way that people work together. One of the things we found in our survey is that the companies doing best were spending as much of their budget on change and adoption—workflow redesign, communication, training—as they were on the technology itself.

Simon London: Let me just clarify there. Companies are spending as much on training and adoption as they are on the actual technology. Because I think a lot of people might find that surprising.

Tamim Saleh: Yes. A lot of people might find that surprising because the assumption is that in order to deploy analytics, you need to invest heavily in data management and quality and in buying the technology. But the vast majority of problems, the blockers, happen outside the agile analytics labs. They happen, for example, because the finance budgeting process does not cater to the fast development of use cases. Or because the HR function is not familiar with how to recruit data scientists: What does an experienced data scientist really look like? Or because the IT function is not designed in a way that lets teams rapidly access data across many, many data sources, so that use cases can be implemented rapidly.

Increasingly, organizations now realize that the battle is not just to buy the technology or create small, agile teams that produce pilots but to think about agility for the organization in totality, and then to address and make decisions in areas like training and budgeting. To cut the story short, the battle cuts across the entire organization and the entire management team—whether it’s the CFO, the HR director, the CIO, or the CMO, they all have a role to play in lubricating the process so that the operating model works end to end to deploy analytics at scale. That’s why people are now beginning to put more attention and budget outside of the technology area.

Tim Fountaine: Just to take a common example from mining and other heavy industries: predictive maintenance—moving from maintaining equipment at regular intervals to stop it from breaking, to a system where you use AI to predict when machines are going to break, then intervening at just the right time or accommodating that prediction in the operations. The analytics of that has been done dozens and dozens of times around the world. It’s certainly solvable.

The hard thing—and often it’s surprising to people—is that to take advantage of that AI technology means totally changing the way companies maintain equipment. It means rostering your maintenance staff differently; it means ordering spare parts at a different frequency; it means scheduling how your mine works differently to accommodate predictive maintenance of equipment. It’s a huge change, and it’s not just about the technology or the AI application itself.
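
To make the mechanics concrete, here is a minimal sketch in Python of the kind of analytics Tim describes: a model trained on historical sensor readings to estimate each machine’s probability of failing soon, so maintenance can be scheduled by predicted risk rather than at fixed intervals. Everything here is hypothetical, with synthetic data, invented feature names, and an illustrative threshold; it is not any system mentioned in the conversation.

```python
# Hypothetical predictive-maintenance sketch: train a classifier on
# synthetic sensor data, then flag machines whose predicted failure
# probability exceeds a chosen threshold.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# One row per machine-day: vibration (mm/s), temperature (deg C),
# and hours since the last service. All values are simulated.
X = np.column_stack([
    rng.normal(1.0, 0.3, n),
    rng.normal(70.0, 10.0, n),
    rng.uniform(0, 2000, n),
])

# Assumed ground truth for the simulation: failure risk rises with
# vibration and with time since the last service.
p_fail = 1 / (1 + np.exp(-(3 * (X[:, 0] - 1.3) + 0.002 * (X[:, 2] - 1000))))
y = rng.random(n) < p_fail  # True = machine failed within the horizon

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Instead of servicing every machine on a fixed schedule, schedule only
# those whose predicted failure probability crosses the threshold.
risk = model.predict_proba(X_test)[:, 1]
flagged = np.flatnonzero(risk > 0.5)
print(f"{len(flagged)} of {len(X_test)} machine-days flagged for early maintenance")
```

As Tim notes, the modeling is the well-trodden part; in practice, the threshold would be set with maintenance planners, since it trades spare-parts and labor cost against the cost of unplanned downtime.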

Simon London: Is there an element here that’s about overcoming fear? I can imagine that when a lot of people hear that their company is going to deploy AI at scale, quite frankly they worry about whether their jobs are still going to be around.

Tamim Saleh: Yes, indeed. One of the big issues is that people assume an AI-enabled transformation will replace everything they are doing. The reality is that AI by itself is not superuseful; the power is in the combination of human and machine—for example, in tasks like demand forecasting in supply chain or targeted marketing. [AI] is most powerful when experienced demand forecasters or marketers know how to use it to make much better decisions.

For the vast majority of activities or tasks that people do, you still need human judgment, but working together with AI you get much better outcomes. Awareness is important, and increasingly many companies are not just training the core 10 percent or so who deliver AI but are also making sure that the entire organization, through online training and other forms of training, understands how AI will work in its environment, how to live with it, and how to benefit from it.
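
As an illustration of the human–machine pattern Tamim describes, here is a small, hypothetical Python sketch: a baseline model proposes a demand forecast, an experienced planner adjusts it, and both numbers are logged so the overrides themselves can be evaluated later. The class and field names (ForecastRecord, DemandForecaster, SKU-123) are invented for this example.

```python
# Hypothetical human-in-the-loop forecasting sketch: the model proposes,
# the planner adjusts, and both figures are recorded for later review.
from dataclasses import dataclass, field

@dataclass
class ForecastRecord:
    sku: str
    model_forecast: float    # units proposed by the algorithm
    final_forecast: float    # units after the human adjustment
    reason: str              # why the planner overrode the model

@dataclass
class DemandForecaster:
    history: list = field(default_factory=list)

    def baseline(self, recent_sales):
        # Stand-in for a real model: a simple moving average.
        return sum(recent_sales) / len(recent_sales)

    def finalize(self, sku, recent_sales, planner_adjustment=0.0, reason=""):
        base = self.baseline(recent_sales)
        final = base + planner_adjustment
        # Logging both numbers lets the company measure whether the
        # planners' overrides actually improve accuracy over time.
        self.history.append(ForecastRecord(sku, base, final, reason))
        return final

forecaster = DemandForecaster()
# The planner knows about an upcoming promotion the model cannot see.
units = forecaster.finalize("SKU-123", [120, 110, 130],
                            planner_adjustment=40, reason="in-store promotion")
print(units)  # 160.0: the model's baseline of 120 plus the planner's judgment
```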

Tim Fountaine: One of the other things that companies doing this well have managed to create is a portfolio of AI initiatives. Part of that is balancing building for the long term, really changing how the business works using AI, with delivering things quickly to maintain momentum, build some excitement, and show the potential. For example, one retailer adopting AI as part of its category-management process eventually wants to use AI to completely change how it thinks about space and the assortment it carries in each store. But that is going to be a multiyear process. While they build toward that, they are using the same data and many of the same ideas to give store managers a little tool that lets them order a few extra items that AI predicts will sell well in their stores, generating some initial sales and excitement, showing the potential, and buying the time needed for the more ambitious reorganization of the assortment in the stores.

Tamim Saleh: The point about the portfolio of AI initiatives is that companies sometimes mistake it for a list of initiatives, but it is not a list.

Simon London: Basically, it cannot be just a grab bag of use cases that have been harvested from across the company. There has to be thought given to the staging and the rollout and the sequencing of these over time.

Tamim Saleh: Correct, yeah.

Tim Fountaine: One of the things, I think, that companies doing well have realized is that, yes, you can find interesting places to apply AI models across your company, but that alone doesn’t fundamentally change the way you do things.

Simon London: Double click for a moment on this concept of the AI academy. What are the elements that you’ve seen in practice that contribute to a successful academy or an academy-like program?

Tim Fountaine: One of the things is starting at the top. The organizations we see doing this best start with the board and the executive team, including the CEO, and make sure that the top managers, the top decision makers in the organization, really understand it. The other thing is not focusing training just on technical talent but really emphasizing the training of translators: people who have, potentially, been in the business for a long time and don’t know much about machine learning, but who do understand how the business works.

Take the steel-company example. These might be people who oversee shifts of engineers working on particular parts of the machinery. Teaching them about AI means they can then work with data scientists and engineers to design solutions that are right for their business. [It’s important to] understand the data properly and make sure people think through some of the implementation challenges at the other end.

Tamim Saleh: The other thing that is important is that this is not classroom training, where a data scientist learns data science or a translator learns translation. It’s training on the job.

Simon London: What’s your advice for senior executives at a company that’s on this journey? What can you do? What are the behaviors that you can model so that you become part of the solution here and not part of the problem?

Tim Fountaine: Well, one CEO who has been very successful in driving AI in his company began by setting the right example, and I think this is important. The first thing he did was show up to the analytics training just like everyone else, get stuck into some coding, and ask questions about how machine learning works and so on. For a lot of leaders, it’s quite uncomfortable leading in a world where you don’t know all the answers yourself and you’re going to rely on data scientists, engineers, and other experts to advise you. One of the best things you can do is be humble, ask lots of questions, and be open to taking advice from others.

Then, of course, the CEO handled it well when one of their first initiatives didn’t actually work. It wasn’t because of anything the team could have done differently; it was just a hard problem. That was a real moment of truth. In this case, the CEO was great and said, “I think you’ve done a wonderful job. I really celebrate that you took the risk to do this. What have we learned, and what can we take forward to the next thing?” Of course, if he had said, “Gosh, what a disaster, this is terrible,” that would have shut the whole thing down.

The other thing this particular CEO did was make the businesses accountable, not the AI specialists or the chief analytics officer. He always made sure to talk to the business owners, the product owners, the heads of the businesses where these ideas were going to be implemented, to ask how things were going and to have them report back on what was happening. He rigorously tracked progress, and where things weren’t moving as fast as expected, he asked questions and helped people solve the problems.

Simon London: What about the organizational-design piece—this question of whether to have analytics resources sort of clustered at the center or, on the other hand, pushed out into the business units and functions?

Tim Fountaine: Well, it’s not an either/or decision; you actually need both. You need some kind of central hub, as well as capability out in the businesses, in what you might call spokes.

We know that from our survey. Companies that are doing well with AI are three times as likely as their peers to have some kind of central capability.

The responsibilities that are almost always best managed centrally are things like data governance, setting systems and standards for AI, recruiting and training, and even defining what it means to be a data scientist at your company. Of course, other things are much better done out in the businesses, in the spokes: things like workflow redesign and choosing where to focus the organizational change that needs to happen as part of implementing an AI solution.

Tamim Saleh: It’s interesting, Tim. Three or four years ago, some companies went for a completely distributed model, with no hub. They ended up creating new types of complexity: teams in different parts of the organization trying to solve the same problem with different methods, different data architectures, or different IT architectures. They never managed to scale.

The reverse is also true. Some companies centralized analytics completely, and that led to other sorts of problems: the central teams were quite far from the business, and the business didn’t buy in. Over time, the hub-and-spoke model evolved out of the pain that these companies endured. The two extremes, in most cases, don’t work.

Simon London: At the risk of a wild generalization, it sounds like companies that are struggling to get to scale with AI probably haven’t invested enough at the center. Do you think that’s fair to say?

Tim Fountaine: I think that’s true, although the more mature companies are, I think, the more they can push things out into the spokes. But it does require having some standardization and a culture where people will stick to that.

Tamim Saleh: Yeah, it’s not easy for many organizations, because you need to strike a balance. On one side, you need a common language, common protocols, and common methodologies, because analytics has a network effect: you need to be able to connect use cases together over time, and that requires discipline. On the other side, you need to give the businesses freedom and distributed access to skills inside their businesses. It’s not natural for most organizations, which are functionally led, to have that model.

Simon London: Maybe just take that down to the level of an individual initiative: a project team charged with implementing a use case. What roles do you need? What’s the mix of people from the hub versus the spoke, and what are some of the common mistakes?

Tamim Saleh: The teams need to be interdisciplinary, end to end: from the business concept, to the design, meaning the user-experience design and how the use case will actually be used, to the mathematics and the data science itself, and then to the technology, in terms of data ingestion, data engineering, and the platform underneath.

Most importantly, the interdisciplinary teams should think beyond the labs to how you industrialize the use case—the training of the users, any interfaces that need to be built with outside processes, any changes that need to happen in those processes. When teams work in this form, they are much more productive. You have a much higher probability of getting it right the first time, or close to it, and a much higher probability of the use case being relevant and applied. There are some key roles—in particular, the product owner. That is the manager in charge, the one responsible for the new AI tool’s success; it should be important to his or her business. The translators are the people who are literate [in] that business domain and take an active part in developing the use case with the data scientists and data engineers.

Then you’ve got the experts: the data architects, data scientists, designers, and visualization people. Outside that group, one needs to think about industrialization, about the professionals who do the training and the tracking, covering people from change management and org design to finance. That is quite often the part that is missed. Even today, as we speak, I would say the majority of organizations pay little attention to what lies outside the immediate agile team of experts and translators when it comes to productionizing. This is something we speak about a lot with our clients, trying to make sure there is real awareness and prioritization of that part as well.

Simon London: So again, it’s the adoption piece, right? You can come up with a solution that potentially can add a whole lot of value to the business, but you have to get it adopted.

Tamim Saleh: Exactly that.

Tim Fountaine: One other thing that’s important is actually tracking value. We see a lot of companies implementing models but never following up to see whether the change associated with the model actually occurs, whether or not it’s working, and whether the models can be improved over time. That value capture, measuring every few weeks whether it’s working and then being able to course correct accordingly, is crucial.

Simon London: Just say a little bit more about the product-owner role. Clearly, that’s pivotal. Is that a person who should be a deep expert coming from the center? Or is that someone who should be pulled from and reside in the business?

Tim Fountaine: It’s important they come from the business. They’re going to be the person who goes back to the business and tries to convince everyone to adopt this new tool or ways of doing things, so they have to really understand how things work in the business. They have to have the trust of their peers to be able to convince them to do it, and they need to be around for the long term to be able to make sure this particular solution gets implemented.

Tamim Saleh: A good product owner should be somebody who wholeheartedly and absolutely understands the value of analytics in his or her business. More often than not, analytics will change the way they work. For example, if you are a product owner in retail, and you are getting much more granular insight on what you could put on the shelves for individual stores, that will have an impact on the way you do logistics, replenishment, and promotions.

Therefore, you need to change the way your people work. That’s very different from a product owner who sees analytics as a use case for an individual task or part of a list. A good product owner needs to see the big picture and think of analytics as a journey.

Simon London: I think we are, sadly, out of time for today. But Tim and Tamim, thank you very much for doing this.

Tamim Saleh: It’s a pleasure; thank you very much.

Tim Fountaine: It’s a pleasure, Simon.

Simon London: And thanks, as always, to you, our listeners, for tuning in to this episode of the McKinsey Podcast. Please do visit us at McKinsey.com or download the excellent McKinsey Insights app to learn more about advanced analytics, AI, and how they can be applied to your business.
