Digital performance management: From the front line to the bottom line

Interview

For the past several years, manufacturers around the world have been devoting substantial resources to joining the Fourth Industrial Revolution (4IR). Industry leaders are starting to see transformational impact not just in individual assets, production lines, or sites, but across the entire end-to-end value chain. They’re now looking to build on their leads.

Sustaining these gains, however, will require most manufacturers—even ones famed for operational excellence—to overcome longstanding barriers within their organizations, whether between vertical layers of hierarchy or between horizontal silos of data. For example, few companies today can translate their operational performance into financial terms, except in rudimentary, backward-looking ways. If a supply-chain interruption forces a few factory lines to shut down for several days, the effects will show up in periodic reports weeks (or even months) later, far too late to aid decision-making.

Only at the most advanced organizations do managers have an integrated, real-time data picture so that when an interruption occurs, they can be reasonably confident in prioritizing their actions among production lines to minimize financial damage. And even fewer organizations can translate what frontline workers know about the problems in their lines into the data that senior leaders need to make strategic decisions that affect prioritization choices.

That’s the promise of enterprise-level digital performance management, which extends 4IR technologies to provide the entire operations organization, from senior leaders to thousands of frontline workers, with actionable insights that enable faster, more accurate decisions about financial and operational performance. The objective: a single system that supports not only the performance-management reporting cycles that the top team needs in reevaluating strategy, but also the agile problem-solving systems that frontline workers use to identify plant and network-wide constraints, perform root-cause analysis, and ensure corrective actions are taken on the most important opportunities.

To develop a perspective on the implications of this breakthrough for manufacturers across industries, McKinsey’s Mike Coxon, a partner in the Cleveland office, and Christian Johnson, a senior editor in Hong Kong, spoke with three executives at the manufacturing software provider PTC: Howard Heppelmann, divisional vice president and general manager for smart connected operations; Craig Melrose, executive vice president for digital transformation solutions; and James Zhang, vice president for market development. Their discussion has been edited for brevity and clarity.

McKinsey: Companies have been trying to link their operations with technology for years—decades, arguably—so that the front line and the front office could act on the same data and make better decisions. What is different now?

Howard Heppelmann: A big part of the difference is simply in what the technology can do to bridge gaps that previously looked insurmountable. Historically, information technology (IT), which underlies business systems and finance departments, and operational technology (OT), which powers manufacturing, were mostly separate. Now there are IT–OT convergence technologies that unify business systems and operational systems.

Also, the longstanding belief was that to get something new, you had to discard and replace something you already had. Given existing investments in factory infrastructure, there isn’t an appetite (or, in many cases, even a possibility) to rip out what’s there and replace it with a single system.

With modern Industry 4.0 technologies, there isn’t a need to. Instead, you build on what you already have—your “brownfield” production networks—combining disparate IT and OT data sources, then homogenizing and normalizing the data to generate digitally charged operational insights and transform processes.
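To make that homogenization step concrete, here is a minimal sketch in Python under stated assumptions: two hypothetical data sources (a PLC historian feed and an MES export) with different field names, units, and timestamp formats are mapped onto one common schema. The field names, functions, and sources are illustrative assumptions, not drawn from PTC’s products.

```python
# Minimal sketch: normalizing records from two hypothetical sources
# (a PLC historian and an MES export) into one common schema.
# Field names, units, and sources are illustrative assumptions.

from datetime import datetime, timezone

COMMON_SCHEMA = ("plant", "line", "timestamp_utc", "good_units", "runtime_min")

def from_plc_historian(record: dict) -> dict:
    """The PLC historian reports runtime in seconds and uses epoch timestamps."""
    return {
        "plant": record["site_id"],
        "line": record["asset"],
        "timestamp_utc": datetime.fromtimestamp(record["epoch_s"], tz=timezone.utc),
        "good_units": record["good_count"],
        "runtime_min": record["runtime_s"] / 60.0,
    }

def from_mes_export(record: dict) -> dict:
    """The MES export already reports minutes but uses ISO-8601 local timestamps."""
    return {
        "plant": record["plant_code"],
        "line": record["work_center"],
        "timestamp_utc": datetime.fromisoformat(record["shift_end"]).astimezone(timezone.utc),
        "good_units": record["units_passed"],
        "runtime_min": record["run_minutes"],
    }

# Once every source maps to COMMON_SCHEMA, downstream metrics and
# benchmarks can treat all plants' data identically.
normalized = [
    from_plc_historian({"site_id": "P01", "asset": "L3", "epoch_s": 1_700_000_000,
                        "good_count": 4200, "runtime_s": 25_800}),
    from_mes_export({"plant_code": "P02", "work_center": "L1",
                     "shift_end": "2023-11-14T22:00:00+01:00",
                     "units_passed": 3900, "run_minutes": 410}),
]
```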

James Zhang: Rip-and-rebuild wasn’t scalable. Picture a global health-products manufacturer with roughly 100 production sites. At just one factory, installing a state-of-the-art traditional industrial software stack, including manufacturing-execution system (MES) and control and supervision software, was going to cost about $10 million and take 18 months. And that’s only part of the investment they would need. If you’re the head of manufacturing, you’re going to want a different path forward.

Craig Melrose: A leadership team will be tempted to say, “Let’s replace a system that does X with one that does half X and half Y,” and think that compromise will effectively balance cost against value generation. But what you usually end up with is a system that does neither X nor Y well.

McKinsey: That sounds like a gap in understanding the root cause of the problem.

Howard Heppelmann: Yes, but we see this rapidly changing as companies rethink their IT architectures, and agile improvement methodologies become more widespread. Many of the cultural barriers in manufacturing are beginning to soften.

However, that new way of thinking, enabled by Industry 4.0, runs into the reality that most production networks still operate over a patchwork of siloed data systems and rely on manual processes to unify the enterprise.

James Zhang: Companies struggle to accommodate a wide range of operating machines that are quite different from one another and that need to be linked with IT systems. The beauty of Industry 4.0, and its related digital technologies, is that they’re designed to address this exact challenge.

McKinsey: What does this look like in practice?

Howard Heppelmann: Let’s give an example. At a fast-moving-consumer-goods manufacturer, the sites implementing this system are now able to integrate financial data and performance data into unified, standardized applications showing exactly how much money each plant is making, down to the level of a single production line—in real time. That’s despite the fact that the plants’ IT and OT back ends are quite different; metrics are now uniform, and the data are therefore comparable.

Because the data are standardized and normalized, internal benchmarking becomes much more powerful. Managers can see that plant X performs better than plant Y and can start to examine why.
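As a hypothetical illustration of benchmarking on standardized data (not drawn from the manufacturer described above), the sketch below computes overall equipment effectiveness, the standard availability × performance × quality measure, for two plants whose records already share a common schema, then ranks them. All figures are invented.

```python
# Hypothetical benchmarking sketch: once plant data share one schema,
# a single OEE calculation (availability x performance x quality)
# can rank plants on a comparable basis. All numbers are made up.

from dataclasses import dataclass

@dataclass
class PlantShift:
    plant: str
    planned_min: float      # planned production time, minutes
    runtime_min: float      # actual run time, minutes
    ideal_rate_upm: float   # ideal units per minute
    total_units: int
    good_units: int

def oee(s: PlantShift) -> float:
    availability = s.runtime_min / s.planned_min
    performance = s.total_units / (s.ideal_rate_upm * s.runtime_min)
    quality = s.good_units / s.total_units
    return availability * performance * quality

shifts = [
    PlantShift("Plant X", 480, 430, 10.0, 4100, 4020),
    PlantShift("Plant Y", 480, 450, 10.0, 3800, 3500),
]

# Rank plants from best to worst on the comparable metric.
for s in sorted(shifts, key=oee, reverse=True):
    print(f"{s.plant}: OEE = {oee(s):.0%}")
```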

Craig Melrose: The inflection point happens when the culture embraces these changes. Senior executives can compare operations and make strategic decisions. Middle managers can make better choices to achieve higher productivity. Frontline operators can solve problems in real time and course-correct during the same shift.

McKinsey: What characteristics do the organizations that are achieving these types of results share?

Howard Heppelmann: The companies that have managed to break through are the ones that have figured out the connections between use cases and P&L impact, so that they’re applying technology to the most critical constraints of their production network. It’s a use-cases-first approach.

McKinsey: That’s a consistent theme: use cases first, technology second.

James Zhang: An industrial-equipment manufacturer illustrates this point well. Its top problem was unplanned machine downtime, which translated into significant cost and quality issues. The first step was to identify and prioritize use cases. Rather than starting with dozens, as is typical, the company focused on the most common (and highest-impact) business problems across its production network. It rolled out just four common use cases at the first plant it targeted; they proved so effective that they are now being rolled out across the entire factory network to help manage inventory, asset performance, energy consumption, and quality.

Actionable insights into machine performance, people’s behavior, and process efficiency now empower managers to continuously optimize production. Improvements at one site can be replicated easily network-wide. Work-in-process has fallen by more than 15 percent, unplanned downtime by one-quarter, and annual energy savings are expected to be more than $10 million at enterprise scale.

Craig Melrose: And the savings the company has earned are now going into product redesign. At the same time, the company is building on its wins, expanding its library of use cases across the network. It’s creating a continuous-improvement feedback loop.

McKinsey: Now that at least some companies are starting to achieve scale, what do you see as the next big opportunity in data and operations?

Howard Heppelmann: To me, the single biggest gap is translating the operational outcomes that the digital-transformation teams are targeting into financial outcomes that the C-suite can understand. When that link is missing, there’s an understandable reluctance to move forward: if finance has only a bleacher view of what’s happening in operations, it holds back.

For transformation leaders, a crucial early goal is to deliver a first proof of value that’s significant enough for the C-suite to say, “This is the one thing to focus on. Drop everything else.”

Craig Melrose: This is important from another perspective as well. Today, companies spend more time trying to find problems than trying to fix them. By creating a new dynamic of fixing rather than just finding problems, the IT–OT link can create massive value.

A big part of the problem is human. Even now, within most big organizations, industry verticals aren’t sharing data with one another, or with functions. These technologies can create a “digital thread” so that information can be shared and stitched together with total transparency by almost anyone in the company to fit the problems they need to solve. That helps organizations move focus and resources from finding to fixing.

McKinsey: You’ve mentioned the potential from applying these technologies at every level of the organization. But mid-level managers face a challenging role, one that is becoming more difficult. How do you win them over?

James Zhang: Middle managers typically start off skeptical: “We’ve deployed overall equipment effectiveness (OEE) tools and a variety of other systems that didn’t achieve the impact we planned. So why should we get behind this approach?” They need to see for themselves how it could help them solve chronic, complex problems, especially ones that cut across multiple parts of the business.

That was the situation at an industrial-services company. The top team hosted a workshop that asked the managers one question: What is your biggest operational problem? In listening to their peers, the managers recognized that they shared many of the same problems: none of them had just one big problem; they had several. They didn’t know what they didn’t know.

While the company’s current systems could help identify an individual problem, they provided little direction in finding a solution when and where the problem occurred—especially if other parts of the organization needed to be involved. By providing transparent, real-time data across the organization, digital performance management could fill the gaps. By the end of the workshop, the managers were setting out ambitious plans for how digital performance management could improve their operations.

Howard Heppelmann: There’s also the role of the workforce to consider. Often there’s a sense that the systems and tools that help the C-suite and enterprise-level operations executives understand and measure performance don’t work hand in glove with what frontline users need for their problem solving. Frontline workers want help getting their jobs done, and that calls for a system that unifies the data set and problem solving for everyone.

McKinsey: What’s the major obstacle?

Howard Heppelmann: The as-is scenario we consistently hear is, “Our data’s out of date, we can’t identify the most critical opportunities for problem solving, and we are reactive, responding to data from the past rather than the present.” A company may have a lot of automation capability but no ability to aggregate data to understand real-time and forward-trending performance at a line, within a plant, or at the network level. If all you’ve done is parse data out to a data lake, at best you’re relying on outdated information and still have no ability to drill in to identify and solve root-cause issues.

So, the data may be available but not in real time. Add in hundreds of overlapping metrics and it’s easy for people to game the system; they can choose to highlight the data that look best for them after the fact. Once you move to universal, real-time data and visibility, analytics can make it forward-looking rather than backward-looking. Managers, frontline workers, and corporate manufacturing executives share a common transparency, and all are empowered with the visibility and capability to dig deep into plant operations to find root causes of problems.
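As a simple, hypothetical illustration of the shift from backward-looking to forward-looking use of the same data, the sketch below smooths a line’s recent cycle times with an exponentially weighted moving average and flags a deteriorating trend before it would surface in a periodic report. The smoothing factor, tolerance, and data are assumptions chosen for illustration only.

```python
# Hypothetical sketch: turning a live stream of line metrics into a
# forward-looking signal with an exponentially weighted moving average.
# The alpha, tolerance, and cycle-time values are illustrative assumptions.

def ewma(values, alpha=0.3):
    """Exponentially weighted moving average over a sequence of readings."""
    avg = values[0]
    for v in values[1:]:
        avg = alpha * v + (1 - alpha) * avg
    return avg

def trending_worse(cycle_times_s, baseline_s, tolerance=0.10):
    """Flag the line if the smoothed cycle time drifts more than 10% above baseline."""
    return ewma(cycle_times_s) > baseline_s * (1 + tolerance)

# Recent cycle times (seconds) creeping upward on one line.
recent = [52, 53, 55, 58, 61, 63]
if trending_worse(recent, baseline_s=52):
    print("Line trending worse than baseline -- trigger root-cause review")
```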

McKinsey: That sort of transparency can be a threat, no?

Howard Heppelmann: If enterprise performance management becomes a performance-reporting tool, there will naturally be a lot of resistance. But if it’s treated and communicated as a continuous problem-solving tool that empowers operators with the data they need to do a better job, while giving the C-suite visibility into where it should focus resources to help frontline operators address key challenges, resistance can be replaced with acceptance and enthusiasm.

Right now, one of the disconnects in manufacturing is that there’s typically a reporting tool for executives to look at highly aggregated performance metrics, while operators solve problems using completely separate tools, and often separate data. Merging those two streams creates value up and down the chain of authority. Operators get forward-looking insights that help in problem solving, so they look good. The consistency of the data then rises throughout the organization: the problem solved at the front line becomes the performance improvement the C-suite sees on the bottom line.

Craig Melrose: This is important to get right, because you want to avoid micromanagement. At one company that implemented this type of system, the business-unit president started to call down to the front line to solve problems. That’s not a good use of a business-unit president’s time, and it likely gets in the way of the frontline workers, who are better positioned to find the real source of any problem they’re experiencing.

Instead, the idea is that the same system draws from the same real-time data at every level of the organization, but for different purposes. Different pulls, different people, different decisions. That’s the breakthrough: a single real-time source of truth across the global production network, seen through the lens of what matters most to each person and each role as they work to achieve manufacturing excellence. Today, many companies have only half of what’s needed: they have good systems either at the top or at the bottom, but not both, not standard, not universal, and not connected.

Howard Heppelmann: This is the point. If a company has a great set of tools that address only the top tier, it’s already alienating a large group of people whose support it needs to make adoption happen: the plant managers and operators. They aren’t going to adopt a system if it’s seen only as a mechanism for reporting on them rather than as something that helps them do their jobs.

Craig Melrose: What you’re really doing is empowering everyone in the organization at their level of influence. So, taking decision-making out of the hands of a couple hundred people and putting it into the hands of 10,000 can feel scary. But when the right guardrails and shared transparency are in place, this unlocks a transformational multiplier.
