What’s the value of a better model? More than you may think

Institutions often invest considerable resources to improve their predictive models with the aim of gleaning better insights into their customers and operations. The benefits are obvious: better models should lead to better insights and, ultimately, greater revenue, efficiency, and market share. Interest in model improvement has intensified in recent years because of new sources of big data, as well as more sophisticated analytics such as machine learning and artificial intelligence. Given this heightened interest, it is a good time to raise the natural question: What is the actual value of a better model?

As an example, consider the value generated by improving a credit-decisioning model for a bank. A well-functioning model should distinguish creditworthy customers from those who pose credit risks, so improving the model should lower expected credit losses. Moreover, a more predictive credit-decisioning model can identify a greater number of customers within a bank’s specified risk tolerance, which should expand revenues.

Our research finds that for each $5 billion in credit balances a bank originates, an increase of just one percentage point in the predictive power of a credit model could reduce losses by up to $10 million within the first year alone. Moreover, that saving is available regardless of the current state of a bank’s models: fine-tuning an already high-performing model is just as valuable as remediating a weaker one, though the improvement is harder to achieve.

We simulated how the loss rate of a credit-card portfolio varied as the performance of the credit-decisioning model changed. We created a large random sample of hypothetical customers applying for credit, with a distribution of FICO scores representative of the US population. We applied hundreds of different credit-decisioning models with known predictive power to each customer application. The models’ Gini coefficients (a common measure of predictive performance) ranged from 1.0 (a model that flawlessly assesses a customer’s probability of default) to 0.0 (a model with no predictive power whatsoever). A hypothetical customer was approved only if the model’s predicted probability of default was less than 10 percent; all other applicants were rejected. We then calculated the expected loss and the number of customers in the resulting portfolio.
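The sketch below illustrates this kind of simulation in simplified form. It is not the study’s actual methodology: the Beta-distributed stand-in for FICO-driven default probabilities, the noise-blending trick used to dial predictive power up and down, and all parameter values are illustrative assumptions, but the underwriting logic (approve only applicants with predicted probability of default below 10 percent, then measure realized losses and approval share) follows the setup described above.

```python
import numpy as np

rng = np.random.default_rng(42)


def rank_auc(defaulted, score):
    """Area under the ROC curve via the Mann-Whitney U statistic (Gini = 2*AUC - 1)."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    n_pos = defaulted.sum()
    n_neg = len(defaulted) - n_pos
    return (ranks[defaulted].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)


def simulate(noise_weight, n=200_000, cutoff=0.10):
    """Underwrite one hypothetical applicant pool with a model of given noisiness.

    noise_weight = 0 approximates a perfect-information ranker; 1 is pure noise.
    Returns (gini, portfolio default rate, share of applicants approved).
    """
    # Stand-in for a FICO-driven default-probability distribution (illustrative only).
    true_pd = rng.beta(1.2, 10.0, size=n)
    defaulted = rng.random(n) < true_pd

    # Model score: blend the informative signal with noise in logit space.
    signal = np.log(true_pd / (1 - true_pd))
    noise = rng.normal(0.0, signal.std(), size=n)
    score = (1 - noise_weight) * signal + noise_weight * noise

    # Rank-preserving recalibration so "predicted PD < 10%" means the same
    # thing for every model, regardless of how noisy its score is.
    predicted_pd = np.empty(n)
    predicted_pd[np.argsort(score)] = np.sort(true_pd)

    approved = predicted_pd < cutoff
    # Note: measured against realized defaults, even the perfect-information
    # model's Gini tops out below 1.0, because defaults are themselves random.
    gini = 2 * rank_auc(defaulted, predicted_pd) - 1
    return gini, defaulted[approved].mean(), approved.mean()


for w in (0.0, 0.3, 0.6, 1.0):
    gini, loss_rate, approval_rate = simulate(w)
    print(f"Gini {gini:5.2f}  portfolio default rate {loss_rate:6.2%}  approved {approval_rate:6.2%}")
```

Sweeping the noise weight over a fine grid and plotting portfolio default rate and approval share against the resulting Gini reproduces the shape of the relationship discussed next.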

A summary of the results is shown in the exhibit. At left is a plot of the expected default rate for the underwritten portfolio as a function of credit-decisioning-model performance. The trend is relatively linear, with the average default rate falling by about 20 basis points for each one-percentage-point increase in predictive performance. At right, the exhibit shows the percentage of customers within the risk tolerance of less than 10 percent probability of default as predictive performance rises. For each one-percentage-point gain in predictive performance, the total percentage of suitable customers increases by about 0.5 percent.
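As a rough check, the back-of-envelope arithmetic below translates these slopes into dollars for the $5 billion origination book cited earlier; the portfolio size and the one-point improvement are illustrative inputs, and the 20-basis-point and 0.5 percent slopes are the figures reported above.

```python
# Back-of-envelope translation of the reported slopes (illustrative inputs only).
originations = 5_000_000_000      # $5 billion in newly originated credit balances
gini_improvement_pts = 1.0        # one percentage point of predictive power

loss_reduction = originations * 0.0020 * gini_improvement_pts   # ~20 bps per point
extra_customers_pct = 0.5 * gini_improvement_pts                # ~0.5% of applicants per point

print(f"Expected first-year loss reduction: ${loss_reduction:,.0f}")        # $10,000,000
print(f"Additional in-tolerance customers: ~{extra_customers_pct:.1f}% of applicants")
```

The result matches the roughly $10 million in first-year loss reduction per $5 billion of originations noted above.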

Exhibit: Better credit-model performance leads to lower expected losses and more customers.

How difficult is it for banks to achieve a one percentage point improvement in a credit model’s predictive power? In our experience, it is typically within reach. In fact, additional data and more sophisticated analytical techniques can often yield an increase of several percentage points. For a top-ten US credit-card issuer with a portfolio of approximately $50 billion, this could translate into tens of millions of dollars in reduced credit losses.

Further, an improved model creates considerable headroom to capture more market share: even high-performing models correctly identify only about 75 to 80 percent of the customers who are truly within the bank’s risk tolerance. And this relationship is not linear; the incremental benefit grows faster than the improvement in model performance.

Improving the predictive performance of models is not always easy, but this example shows that it can be well worth the effort. Advances in analytics such as machine learning and artificial intelligence, together with the availability of new sources and types of data, give companies more opportunity than ever to improve the predictive performance of their models. The key is to ensure that scarce analytical talent is focused on the problems where those performance gains will actually translate into real business value.

Bryan Richardson is a consultant in McKinsey’s Vancouver office, and Derek Waldron is a partner in the New York office.