Managing data as an asset: An interview with the CEO of Informatica

As CEO of Informatica—one of the world’s largest providers of cloud-based services for managing data across multiple environments, supporting analytics programs, and achieving compliance with data regulations—Anil Chakravarthy sees how companies in every industry use data to make better business decisions. What distinguishes the most successful businesses, in his view, is that they have developed the ability to manage data as an asset across the whole enterprise. That ability depends on certain supporting elements: a strong technical foundation, mechanisms to govern the handling of data, and employee accountability for managing data well. In this interview with McKinsey partner Roger Roberts, Chakravarthy explains why these elements matter and offers examples of how they have helped companies use data in support of their business objectives. An edited version of his remarks follows.

McKinsey: In your experience running a data-management company, how do businesses use data to create value consistently?

Anil Chakravarthy: The most value comes from being able to collect and correlate information from different kinds of systems. For example, a major oil company that we work with historically used traditional data models and databases to determine things like the profitability of an oil well. They would start by collecting data about an oil well. What are my costs? How much production do I have? Then they could ask, If the oil price hits X, will this oil well be productive or not? That was a traditional way of doing an analysis on an oil well.

Now through IoT [the Internet of Things], they have a lot more data on actual productivity in terms of things like output and maintenance status. And they’re correlating that data. So now they develop predictive analyses that identify the most efficiently and effectively run oil wells, or the most profitable ones. Then they can make real-time decisions.

We see this in industry after industry. It’s being able to take the data that a company traditionally had, which dealt with things like profitability, cost, expense, et cetera, and combine it with more IoT-based data on efficiency, maintenance status, and so on.
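
As a rough illustration of the kind of correlation Chakravarthy describes, the sketch below joins a hypothetical table of per-well financial data with an equally hypothetical table of IoT readings and ranks wells by break-even price. The tables, column names, and break-even logic are illustrative assumptions, not the oil company’s actual model.

```python
import pandas as pd

# Hypothetical traditional per-well financial data.
finance = pd.DataFrame({
    "well_id": ["W1", "W2", "W3"],
    "operating_cost_usd": [120_000, 95_000, 150_000],
    "monthly_output_bbl": [4_000, 3_200, 3_900],
})

# Hypothetical IoT-derived operational signals for the same wells.
iot = pd.DataFrame({
    "well_id": ["W1", "W2", "W3"],
    "uptime_pct": [0.97, 0.88, 0.93],
    "days_since_maintenance": [12, 45, 30],
})

# Correlating the two views is what unlocks the new analysis.
wells = finance.merge(iot, on="well_id")

# Break-even oil price per barrel: the price at which revenue covers cost.
wells["breakeven_usd_per_bbl"] = (
    wells["operating_cost_usd"] / wells["monthly_output_bbl"]
)

# Rank wells with the most profitable (lowest break-even) first.
print(wells.sort_values("breakeven_usd_per_bbl"))
```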

McKinsey: What are the major challenges that companies encounter as they integrate data from different systems?

Anil Chakravarthy: What’s causing the biggest pain right now is how to do this at an enterprise-wide scale. Fundamentally, the way data is designed, collected, and stored has not changed from how it was being done ten, 20, or 30 years ago. Data is developed in the context of a specific business initiative or a specific application. Companies still optimize the ways they collect and design the data for a single business initiative.

But suppose you want to do something else with the same data. At a bank, data might have been collected for a mortgage-application system that was built 25 years ago. But now they want to use that data in a different context, and so they have to collect the data, cleanse the data, and govern the data differently. Once I give my data to another business unit, what are they going to do with it? Are they going to start calling my customers? What happens then? People get possessive of their data, and they’re not motivated to share it. That’s a basic organizational barrier that must be overcome.

You also have a lot of technical barriers. What format is that data in? What database did I use? Was that data encrypted or not encrypted? In addition, the original application or business system that was using the data might have had a certain logic built into it. If I’m giving you the data without the business logic, will that data still be useful and still make sense in a new context?

McKinsey: How are companies changing their approach to data management so it works effectively across the enterprise?

Anil Chakravarthy: In the past, every business function, every application created its own data model and its own data repository. That led to this huge proliferation of data. Now there are so many things being done to make sense of the data after the fact. The big change now is, How do you design that capability in from the start?

It’s kind of like what we saw in manufacturing a few years ago. Kaizen and similar techniques came in because managers realized that it’s really expensive to fix defects after products have been made, especially if products have already been shipped. It’s much more efficient and effective to fix defects close to the point of production or design. That’s exactly what we’re seeing now in the world of data.

For instance, imagine you want to create a new customer-data repository for experience, engagement, and so on. Instead of taking an approach where you bring all the data from wherever it is, cobble it together, and launch something, enterprises are taking a step back and saying, “No, let’s first do a data catalog: identify what data we have, what data is higher quality versus lower quality, what data is sensitive versus not, what data is from a system of record versus other sources, and so on.” Once you have that map, you can design and build a new platform to be extensible and to support multiple initiatives and use cases for customer data.

That’s the big difference we’re seeing: it’s taking a step back, understanding how data needs to be gathered and managed, and designing that into the system from the get-go.
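
In minimal form, a data catalog of the sort Chakravarthy describes could be modeled as below. The entry fields, scoring, and example datasets are illustrative assumptions rather than any particular product’s schema.

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    SENSITIVE = "sensitive"  # e.g., personal or regulated data

@dataclass
class CatalogEntry:
    dataset: str            # logical name of the dataset
    source_system: str      # where the data physically lives
    system_of_record: bool  # authoritative source, or just a copy
    quality_score: float    # 0.0-1.0, e.g., from profiling checks
    sensitivity: Sensitivity

catalog = [
    CatalogEntry("customer_profile", "crm_db", True, 0.95, Sensitivity.SENSITIVE),
    CatalogEntry("web_clickstream", "event_lake", False, 0.70, Sensitivity.INTERNAL),
]

# With the map in hand, a new customer-data platform can be designed
# deliberately, for example starting from high-quality systems of record.
trusted = [e for e in catalog if e.system_of_record and e.quality_score >= 0.9]
print([e.dataset for e in trusted])
```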

McKinsey: How does an effort like that fit into a wider digital-transformation program?

Anil Chakravarthy: Data should support the many initiatives that are typically part of the digital transformation. For example, digital transformation usually involves the use of next-generation analytics platforms. How do I make analytics available to all the key people within the company, so they can develop predictive insights and so on? If you want to have that kind of widespread, next-generation analytics available, you need a data platform that can support that.

Another common example is a 360-degree view of customers. A lot of companies are interested in how they can really improve customer experience and customer engagement. Most previous-generation systems were built for transactions. At a bank, those systems were built for checking-account transactions or mortgage transactions. They were not built for experience. Changing that requires combining data related to both transactions and all the interactions with customers.

So that’s why you need a data platform: to support the typical initiatives associated with digital transformations. Ultimately, data becomes the fuel that helps power multiple use cases or opportunities that the business may want to go after as part of the transformation. And so you have to do a data transformation to enable that digital transformation.

McKinsey: What does the data transformation involve at a technical level?

Anil Chakravarthy: For most companies, the traditional approach to managing IT has been to build a budget around big application projects. Most customers are realizing they need to go to a more agile model, where the applications they develop are modular; they’re smaller. That move toward an agile model is really helped by having a data platform that can support different applications. Once you build an independent data platform, you can make application development much more agile.

The platform has to be metadata based so you can actually understand and have a true catalog of data. It doesn’t have to store all the data; it’s the layer through which data gets processed and routed to the right applications. That creates an abstraction layer.

Think of the supply of data as coming from back-end and legacy systems, which can only move so fast, while the consumption of data is changing much more rapidly. By creating an abstraction layer through the data platform, you enable new applications to move much faster without having to create point-to-point connections.
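
The abstraction layer he describes can be sketched in a few lines: applications request data by logical name, and a metadata registry resolves that name to whatever physical system currently holds it. The registry contents and function here are hypothetical.

```python
# Hypothetical metadata registry: logical dataset names mapped to
# their current physical locations.
REGISTRY = {
    "customer_profile": {"system": "crm_db", "table": "customers_v2"},
    "transactions": {"system": "core_banking", "table": "txn_history"},
}

def read_dataset(logical_name: str) -> str:
    """Resolve a logical dataset name to a query against its current source."""
    meta = REGISTRY[logical_name]
    # A real platform would dispatch to the right connector here;
    # this sketch only shows the indirection.
    return f"SELECT * FROM {meta['system']}.{meta['table']}"

# A consuming application never hard-codes the back-end system.
print(read_dataset("customer_profile"))
```

The design choice is that moving or replatforming a dataset means updating one registry entry rather than rewiring every application, which is how consumption can move faster than supply.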

McKinsey: Whose job is it to lead the data transformation?

Anil Chakravarthy: Companies typically start by launching a big digital-transformation initiative. Usually that involves designating a chief digital officer, who is responsible for identifying the key processes that the company will transform with the new digital technologies. The chief digital officer then works with someone like a chief data officer.

Now, the chief data officer might be appointed for a couple of different reasons. Some companies need a chief data officer because of regulatory compliance. A lot of companies have one because they want to build a data platform that lets them bring in the data from a variety of different systems to power their digital transformation. The chief data officer plays a role in teaching everyone in the company how to work with data.

McKinsey: What are some other organizational aspects of this transformed approach to working with data?

Anil Chakravarthy: I think the best companies are treating data as a strategic asset that everyone has to manage well. When it comes to managing money, that’s not just the CFO’s problem. It’s everyone’s job to use the company’s resources effectively. Same thing with attracting, retaining, and developing people—that’s not just the CHRO’s [chief HR officer’s] problem. I think that is starting to happen with data. People are recognizing that it’s not their data. It’s the company’s data. So it starts with building that mind-set, starting with the tone at the top.

Once you get the right culture, then the company can start to think about how it manages data, so people can do their work and optimize for their priorities while, at the same time, balancing the needs of the company for the future.

How do you strike the right balance? The answer is different for every company, but that’s where the chief data officer plays a role in saying, “This is where you have a lot of autonomy and make your own decisions. And here’s where you need to play along with how we’re trying to treat data as an asset for the entire company.”

McKinsey: How does that change the way employees manage data on a day-to-day basis?

Anil Chakravarthy: There’s a huge change in mind-set from a centralized, after-the-fact approach to data governance and data quality, to a collaborative approach where you try to do it right from the start.

In the past, you would build a data warehouse. You’d put all the data into the warehouse and set up a team of data-governance experts or data-quality people to sample and check records and determine whether the information is complete and consistent. That approach simply does not scale, especially when we’re talking about the kinds of data volumes that we have now.

The current approach is to make data governance and quality a small part of the job of the many people across the company who are the closest to the data and understand it in the context of the business. It’s done before the data is collected and processed. There is also a process in place where if you still have data that is not the highest quality, it gets cleaned up on an iterative, constant basis rather than after the fact.
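
A minimal sketch of that shift, with quality rules applied as records arrive and failures quarantined for iterative cleanup, might look like the following. The rules and record fields are invented for illustration.

```python
# Hypothetical quality rules applied at the point of collection, rather
# than by a central team sampling a warehouse after the fact.
def validate(record: dict) -> list[str]:
    errors = []
    if not record.get("customer_id"):
        errors.append("missing customer_id")
    if "@" not in record.get("email", ""):
        errors.append("malformed email")
    return errors

clean, quarantine = [], []
for record in [
    {"customer_id": "C1", "email": "a@example.com"},
    {"customer_id": "", "email": "not-an-email"},
]:
    issues = validate(record)
    # Failing records are quarantined for constant, iterative cleanup
    # instead of flowing downstream uncorrected.
    (quarantine if issues else clean).append((record, issues))

print(f"{len(clean)} accepted, {len(quarantine)} quarantined for cleanup")
```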

McKinsey: Do you see a role for artificial intelligence [AI] in ensuring the quality of data?

Anil Chakravarthy: Absolutely, and it’s already happening. There are new techniques around, for instance, identifying sensitive data. The General Data Protection Regulation [GDPR] says that if you have data relating to European customers or European employees, that data needs to be handled in a certain manner. You need to know where you store data related to European customers and identify databases where that data is kept.

A lot of AI and machine-learning [ML] techniques are being used to tackle those kinds of problems. The results might not be 100 percent correct, just like the output of any other AI or ML system. But even if it’s 90 percent right, the other 10 percent can come from a human expert who looks at the output and confirms that the database is secured with GDPR in mind.

It’s hard to do the entire job with people because these tasks are extremely repetitive. Even if you could get somebody to do it for the first ten databases, if you then said, “OK, now do it for the next thousand databases,” it’s just not practical. So I think it’s a task that’s much better done by a software robot, to automate as much of this tedious work as possible, with humans to handle the exceptions that require more judgment.
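
As a toy stand-in for the AI and ML techniques Chakravarthy mentions, the sketch below uses simple pattern matching to score columns for likely personal data, auto-tagging high-confidence hits and routing uncertain ones to a human reviewer. The patterns, thresholds, and sample data are illustrative assumptions.

```python
import re

# Simple heuristics standing in for a trained classifier.
PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def score_column(values: list[str]) -> float:
    """Fraction of sampled values that match a personal-data pattern."""
    hits = sum(any(p.search(v) for p in PATTERNS.values()) for v in values)
    return hits / len(values) if values else 0.0

columns = {
    "contact": ["anna@example.eu", "bob@example.de", "+49 30 1234567"],
    "notes": ["renewal due", "call scheduled", "vip customer"],
}

# The software robot handles the bulk of the work; only the uncertain
# middle band is escalated to a human expert.
for name, sample in columns.items():
    confidence = score_column(sample)
    if confidence >= 0.8:
        print(f"{name}: auto-tagged as sensitive ({confidence:.0%})")
    elif confidence >= 0.2:
        print(f"{name}: route to human review ({confidence:.0%})")
    else:
        print(f"{name}: likely not sensitive ({confidence:.0%})")
```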
