Operationalizing machine learning in processes


As organizations look to modernize and optimize processes, machine learning (ML) is an increasingly powerful tool to drive automation. Unlike basic, rule-based automation—which is typically used for standardized, predictable processes—ML can handle more complex processes and learn over time, leading to greater improvements in accuracy and efficiency.

But many companies are stuck in the pilot stage: they may have developed a few discrete use cases, but they struggle to apply ML more broadly or to take advantage of its most advanced forms. A recent McKinsey Global Survey, for example, found that only about 15 percent of respondents have successfully scaled automation across multiple parts of the business. And only 36 percent of respondents said that ML algorithms had been deployed beyond the pilot stage.

A central challenge is that institutional knowledge about a given process is rarely codified in full, and many decisions are not easily distilled into simple rule sets. In addition, many sources of information critical to scaling ML are either too high-level or too technical to be actionable (see sidebar “A glossary of machine-learning terminology”). This leaves leaders with little guidance on how to steer teams through the adoption of ML algorithms.

The value at stake is significant. By building ML into processes, leading organizations are increasing process efficiency by 30 percent or more while also increasing revenues by 5 to 10 percent. At one healthcare company, a predictive model classifying claims across different risk classes increased the number of claims paid automatically by 30 percent, decreasing manual effort by one-quarter. In addition, organizations can develop scalable and resilient processes that will unlock value for years to come.

Four steps to turn ML into impact

ML technology and relevant use cases are evolving quickly, and leaders can become overwhelmed by the pace of change. To cut through the complexity, the most advanced organizations are applying a four-step approach to operationalize ML in processes.


Step 1. Create economies of scale and skill

Because processes often span multiple business units, individual teams frequently focus on using ML to automate only the steps they control. That, we find, is usually a mistake. Having different groups around the organization work on projects in isolation, rather than across the entire process, dilutes the overall business case for ML and spreads precious resources too thinly. Siloed efforts are difficult to scale beyond a proof of concept, and critical aspects of implementation, such as model integration and data governance, are easily overlooked.

Rather than seeking to apply ML to individual steps in a process, companies can design processes that are more automated end to end. This approach capitalizes on synergies among elements that are consistent across multiple steps, such as the types of inputs, review protocols, controls, processing, and documentation. Each of these elements represents potential use cases for ML-based solutions.

For example, several functions may struggle with processing documents (such as invoices, claims, and contracts) or detecting anomalies during review processes. Because many of these use cases have similarities, organizations can group them together as “archetype use cases” and apply ML to them en masse. Exhibit 1 shows nine typical ML archetype use cases that make up a standard process.

Exhibit 1: Nine machine-learning archetypes can be used to redesign processes across an organization.

Bundling automation initiatives in this way has several advantages. It generates a more attractive return on investment for ML development, and it allows the implementation team to reuse knowledge gained from one initiative to refine another. As a result, organizations can make faster progress in developing capabilities and scaling initiatives: one organization discovered that several of its initiatives relied on the same natural-language-processing technology, allowing it to save time when developing similar solutions later.
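To make that kind of reuse concrete, here is a minimal sketch in Python using scikit-learn. The data, labels, and the build_text_classifier helper are all hypothetical; the point is simply that one shared text-classification component can serve two archetype use cases, invoice routing and claims triage, with only the training examples swapped.

```python
# Hypothetical sketch: one reusable text-classification component
# serving two archetype use cases. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_text_classifier(texts, labels):
    """Shared NLP component: TF-IDF features plus a linear classifier.
    Each use case supplies only its own training examples and labels."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model

# Toy training data; a real deployment would use historical documents.
invoice_router = build_text_classifier(
    ["office supplies for q3", "annual server hosting renewal"],
    ["procurement", "it"],
)
claims_triager = build_text_classifier(
    ["minor windshield chip repair", "total loss after highway collision"],
    ["low_risk", "high_risk"],
)

print(invoice_router.predict(["cloud hosting invoice"]))
print(claims_triager.predict(["rear bumper collision damage"]))
```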

Step 2. Assess capability needs and development methods

The archetype use cases described in the first step can guide decisions about the capabilities a company will need. For example, companies that focus on improving controls will need to build capabilities for anomaly detection. Companies struggling to migrate to digital channels may focus more heavily on language processing and text extraction.
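As a hedged illustration of what an anomaly-detection capability might look like at its simplest, the sketch below uses scikit-learn's IsolationForest to flag unusual transactions for review. The features and values are invented for illustration, not drawn from any real control process.

```python
# Hypothetical sketch of an anomaly-detection capability for controls.
# Requires scikit-learn and NumPy; features and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per transaction: amount, line items, days to payment.
transactions = np.array([
    [120.0, 3, 30],
    [95.5, 2, 28],
    [110.0, 3, 31],
    [104.0, 4, 29],
    [9800.0, 1, 2],  # unusual: very large amount, paid almost immediately
])

# contamination is the expected share of anomalies; tune it on labeled history.
detector = IsolationForest(contamination=0.2, random_state=0)
flags = detector.fit_predict(transactions)  # -1 = anomaly, 1 = normal

for row, flag in zip(transactions, flags):
    if flag == -1:
        print("flag for review:", row)
```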

As for how to build the required ML models, there are three primary options. Companies can:

  • build fully tailored models internally, devoting significant time and capital to bespoke solutions that meet their unique needs;
  • take advantage of platform-based solutions using low- and no-code approaches; or
  • purchase point solutions for specific use cases, which is easier and faster but involves trade-offs.

Exhibit 2 shows a list of the advantages and disadvantages of each approach.

Exhibit 2: Machine-learning models can be built in three different ways, depending on a company’s context and situation.

Deciding among these options requires assessing a number of interrelated factors, including whether a particular set of data can be used in multiple areas and how ML models fit into broader efforts to automate processes. Applying ML in a basic transactional process—as in many back-office functions in banking—is a good way to make initial progress on automation, but it will likely not produce a sustainable competitive advantage. In this context, it is probably best to use platform-based solutions that leverage the capabilities of existing systems.

Step 3. Give models ‘on-the-job’ training

Operationalizing ML is data-centric—the main challenge isn’t identifying a sequence of steps to automate but finding quality data that the underlying algorithms can analyze and learn from. This can often be a question of data management and quality—for example, when companies have multiple legacy systems and data are not rigorously cleaned and maintained across the organization.

However, even if a company has high-quality data, it may not be able to use the data to train the ML model, particularly during the early stages of model design. Typically, deployments span three distinct, and sequential, environments: the developer environment, where systems are built and can be easily modified; a test environment (also known as user-acceptance testing, or UAT), where users can test system functionalities but the system can’t be modified; and, finally, the production environment, where the system is live and available at scale to end users.
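One minimal way a team might encode those environment distinctions in configuration is sketched below in Python. The field names, data sources, and structure are assumptions for illustration, not a standard.

```python
# Illustrative sketch of environment-specific data rules; the field
# names, sources, and structure are assumptions, not a standard.
ENVIRONMENTS = {
    "dev": {  # systems built and easily modified
        "training_data": "synthetic_or_masked_extract",
        "system_modifiable": True,
        "uses_real_data": False,
    },
    "uat": {  # users test functionality; system frozen
        "training_data": "anonymized_production_snapshot",
        "system_modifiable": False,
        "uses_real_data": False,
    },
    "prod": {  # live and available at scale
        "training_data": "live_production_data",
        "system_modifiable": False,
        "uses_real_data": True,
    },
}

def training_source(env: str) -> str:
    """Return the data source a model may train on in this environment."""
    return ENVIRONMENTS[env]["training_data"]

print(training_source("uat"))  # anonymized_production_snapshot
```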

Even though ML models can be trained in any of these environments, the production environment is generally optimal because it uses real-world data (Exhibit 3). However, not all data can be used in all three environments, particularly in highly regulated industries or those with significant privacy concerns.

Exhibit 3: Matching the right data set to the right production stage is critical for successful deployment of machine learning.

In a bank, for example, regulatory requirements mean that developers can’t “play around” with real customer data in the development environment. At the same time, models won’t function properly if they’re trained on incorrect or artificial data. Even in industries subject to less stringent regulation, leaders have understandable concerns about letting an algorithm make decisions without human oversight.

To deal with this challenge, some leading organizations design the process to allow human review of ML model outputs (see sidebar “Data options for training a machine-learning model”). The model-development team sets a certainty threshold for each decision: the machine handles any case above that threshold with full autonomy, while cases below it are routed to human reviewers, whose decisions can then be fed back to retrain the model. This human-in-the-loop approach gradually enabled a healthcare company to raise the accuracy of its model; within three months, the proportion of cases resolved via straight-through processing rose from less than 40 percent to more than 80 percent.
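A minimal sketch of that routing logic appears below, in Python. The threshold value, the Decision structure, and the claim labels are illustrative assumptions; the point is simply that high-confidence cases go straight through while everything else is queued for review.

```python
# Minimal sketch of human-in-the-loop routing; the threshold, labels,
# and Decision structure are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # set by the model-development team

@dataclass
class Decision:
    label: str         # e.g., a claim's predicted risk class
    confidence: float  # the model's certainty for this prediction

def route(decision: Decision) -> tuple[str, str]:
    """Auto-process high-confidence cases; queue the rest for human review.
    Reviewed cases can later be fed back as labeled training data."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return ("auto_processed", decision.label)
    return ("human_review", decision.label)

print(route(Decision("pay_claim", 0.97)))  # ('auto_processed', 'pay_claim')
print(route(Decision("pay_claim", 0.62)))  # ('human_review', 'pay_claim')
```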

Step 4. Standardize ML projects for deployment and scalability

Innovation, whether in applying ML or in just about any other endeavor, requires experimentation. When researchers experiment, they have protocols in place to ensure that experiments can be reproduced and interpreted, and that failures can be explained. The same logic should apply to ML: an organization should accumulate knowledge even when experiments fail.

The right guidance is usually specific to a particular organization, but best practices such as MLOps can help guide any organization through the process. MLOps applies the principles of DevOps (the combination of software development and IT operations) to machine learning and artificial intelligence. The approach aims to shorten the analytics development life cycle and increase model stability by automating repeatable steps in the workflows of software practitioners (including data engineers and data scientists).
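The sketch below illustrates, in plain Python with hypothetical placeholder steps, the kind of repeatable workflow that MLOps tooling automates: every run executes the same versioned steps in the same order, so results are reproducible and failures can be traced to a specific step.

```python
# Hypothetical placeholder pipeline; real MLOps stacks would pull
# versioned data, train real models, and log artifacts to a registry.
def ingest_data():
    return {"rows": 10_000}  # stand-in for a versioned data extract

def validate_data(data):
    assert data["rows"] > 0, "empty extract: fail fast and keep the logs"
    return data

def train_model(data):
    return {"version": "v1", "auc": 0.91}  # stand-in for real training

def passes_gate(model):
    return model["auc"] >= 0.85  # promotion threshold

def run_pipeline():
    """Same steps, same order, every run: reproducible by construction."""
    data = validate_data(ingest_data())
    model = train_model(data)
    if passes_gate(model):
        print(f"promoting model {model['version']}")
    else:
        print("model below threshold; run archived for analysis")

run_pipeline()
```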

Although MLOps practices can vary significantly, they typically involve a set of standardized and repeatable steps to help scale ML implementation across the enterprise, and they address all components needed to deliver successful models (Exhibits 4 and 5).

Exhibit 4: Achieving scale requires a standardized and repeatable approach to machine-learning operationalization.
Exhibit 5: Machine-learning operations covers all components needed to deliver models.

While standardizing delivery is helpful, organizations also need to address the people component by assembling dedicated, cross-functional teams to embed ML into daily operations. Modifying organization structures and building new capabilities are both critical for large-scale adoption. The healthcare company mentioned earlier, for example, built an ML model to screen up to 400,000 job candidates each year. Recruiters no longer needed to sort through piles of applications, but the organization did need new capabilities to interpret model outputs and to train the model over time on complex cases.


Adopting the right mindsets

ML has become an essential tool for companies to automate processes, and many companies are seeking to adopt algorithms widely. Yet the journey is difficult. The right mindsets matter.

The more data, the better. Unlike rule-based automation, which centers on processes, ML centers on data. A common refrain is that the three most important elements required for success are data, data, and more data.

Plan before doing. Excitement over ML’s promise can cause leaders to launch too many initiatives at once, spreading resources too thin. Because the ML journey contains so many challenges, it is essential to break it down into manageable steps: think through archetype use cases and development methods, and understand which capabilities are needed and how to scale them.

Think end to end. Asking managers of siloed functions to develop individual use cases can leave value on the table. It’s important to reimagine entire processes from beginning to end, breaking apart the way work is done today and redesigning the process in a way that’s more conducive to how machines and people work together.


There is a clear opportunity to use ML to automate processes, but companies can’t apply the approaches of the past. Instead, the four-step approach outlined here provides a road map for operationalizing ML at scale.
