Pricing innovation and underwriting excellence have always provided a competitive advantage for insurers. Inflation, rising claims costs, shifts in driving patterns, and regulatory change have added pressure on insurance companies to adjust their pricing processes and to make their practices more transparent, accurate, and fair. McKinsey spoke with partner Elena Pizzocaro and associate partner Priti Joseph to learn more about pricing and how to improve pricing processes using machine learning.
McKinsey: Why is pricing agility critical in insurance, and what does it take to transform pricing structures?
Elena Pizzocaro: There is a clear need to update prices quickly to counter inflation and to detect and correct adverse loss trends driven by recent portfolio shifts. Because the insurance pricing process has many intermediate steps, it can quickly become complex, with multiple variants. Insurers often take six months or more to update their pricing. To react faster to competitors, detect loss trends early, and update prices frequently, companies can focus on improving automation, building transparency into pricing processes, and developing a clear view of pricing profitability.
A full-scale pricing transformation focuses on building a foundation to achieve substantial and sustainable improvements across the entire pricing value chain, including underwriting strategies, risk selection, technical pricing, market-based pricing, and behavioral pricing, when regulations allow. Pricing execution and governance are equally important along with foundational capabilities such as data, organization, and talent, as well as advanced analytics and digital capabilities.
McKinsey: How can insurers address pricing agility?
Priti Joseph: Traditionally, linear models involving 20 to 40 variables were widely used for pricing. Nowadays, more and more insurance carriers are exploring machine-learning models to improve pricing accuracy, differentiate risks by severity, and quickly identify and test new data sources.
In terms of improving pricing agility, it’s important to first understand the end-to-end pricing process and how results from one pricing process affect others. Knowing this process helps identify opportunities to simplify and streamline operations. Second, creating dashboards to monitor business performance at a granular level is important to understand how the book is performing in terms of premiums and loss ratios. For example, insurers can automate the detection of emerging loss trends and risk shifts in the portfolio.
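As an illustration of the automated loss-trend detection described above, the sketch below flags segments whose recent loss ratio drifts above their historical average. The record layout, window, and threshold are hypothetical assumptions, not a production design:

```python
from collections import defaultdict

def flag_loss_trends(records, window=3, threshold=0.05):
    """Flag segments whose recent loss ratio drifts above their history.

    `records` is a hypothetical list of monthly rows:
    (segment, month, earned_premium, incurred_losses), one row per
    segment-month, assumed already in chronological order.
    """
    by_segment = defaultdict(list)
    for segment, month, premium, losses in records:
        by_segment[segment].append(losses / premium)  # monthly loss ratio

    flagged = []
    for segment, ratios in by_segment.items():
        baseline = sum(ratios) / len(ratios)            # long-run average
        recent = sum(ratios[-window:]) / len(ratios[-window:])
        if recent - baseline > threshold:               # adverse drift
            flagged.append((segment, round(baseline, 3), round(recent, 3)))
    return flagged
```

A dashboard job could run a check like this monthly and route flagged segments to the pricing team for repricing.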
Once they identify what needs to be corrected in pricing, insurance carriers need the infrastructure, operating model, and rigor to quickly update and deploy new pricing models. They can do this by leveraging automation and building cross-functional teams to drive these outcomes. For instance, teams can automate and merge data pipelines for policy and claims data. They can also refresh pricing models automatically by integrating them with external data and create automated reports that help explain the pricing models to customers. When building out tooling and technology, however, insurers should keep the strategic pricing architecture in mind to avoid technical debt.
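A minimal sketch of the policy-and-claims merge step, assuming hypothetical record layouts keyed on a shared `policy_id` (in practice this would be a pipeline job, not an in-memory join):

```python
from collections import defaultdict

def merge_policy_claims(policies, claims):
    """Join policy records with their claims into one modelling table.

    `policies` and `claims` are hypothetical lists of dicts sharing a
    `policy_id` key; policies without claims get zero incurred losses.
    """
    losses = defaultdict(float)
    for claim in claims:
        losses[claim["policy_id"]] += claim["incurred"]  # aggregate per policy
    return [
        {**policy, "incurred_losses": losses.get(policy["policy_id"], 0.0)}
        for policy in policies
    ]
```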
McKinsey: How can insurers achieve pricing sophistication while keeping explainability in mind?
Elena Pizzocaro: As insurers build and deploy new pricing models, they also need to develop easy ways to access and understand those models, including from a regulator’s point of view. They can do this by offering an actuarial interpretation of the pricing models, such as how each variable contributes to the final price; using dislocation analysis to determine which subsets of clients will be affected by a price change; understanding how prices for each segment shift under new pricing models compared with old ones; and using state-of-the-art methods such as explainable AI [XAI], which makes machine-learning models more comprehensible and provides insight into how particular predictions or outputs are reached.
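One simple, regulator-friendly form of the per-variable contribution mentioned above is the multiplicative decomposition used in GLM-style rating: the final premium is a base rate times one fitted relativity per rating factor. The factor names and relativity values in this sketch are purely illustrative:

```python
def explain_price(base_rate, relativities, risk):
    """Decompose a multiplicative GLM-style price into per-factor contributions.

    `relativities` maps each rating factor and level to a fitted relativity,
    e.g. {"driver_age": {"18-25": 1.6, "26-60": 1.0}} — hypothetical values.
    `risk` maps each factor to the level observed for one quoted risk.
    """
    price = base_rate
    contributions = {}
    for factor, level in risk.items():
        relativity = relativities[factor][level]
        contributions[factor] = relativity   # each factor's multiplier on the price
        price *= relativity
    return price, contributions
```

A report built on such a decomposition can state, factor by factor, why one quote is priced above or below the base rate.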
McKinsey: What else should insurance carriers consider when it comes to XAI or explainability?
Priti Joseph: Insurance carriers need to ensure that AI models are fair and unbiased and that they use quality data that adheres to privacy laws and regional regulations. Insurers should start with good-quality data and implement automated checks that detect bias in the data, such as unequal outcomes for protected classes. They can also monitor model degradation automatically, with an emphasis on traceability, discoverability of data, and the quality of documentation. Above all, business-line oversight and model risk management should be built into these efforts. Insurers should also build models that identify causally motivated relationships, not just correlations. This transparent view helps veteran actuaries and subject matter experts, who may be less experienced with recent techniques, collaborate and provide input more easily. Finally, insurers should establish a collaborative model between developers and live operations to minimize data reconciliation issues in production.
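One possible shape for the automated bias check described above is a demographic-parity-style comparison: each protected group’s average predicted outcome is compared with the overall average. The group labels, tolerance, and notion of “outcome” in this sketch are illustrative assumptions, and real fairness reviews involve more than one metric:

```python
def check_group_disparity(predictions, groups, tolerance=0.1):
    """Flag unequal average outcomes across protected groups.

    `predictions` are model outputs (e.g., predicted premiums) and
    `groups` the corresponding protected-class labels. A group is
    flagged when its mean deviates from the overall mean by more
    than `tolerance` (relative) — a simplistic illustrative rule.
    """
    overall = sum(predictions) / len(predictions)
    by_group = {}
    for prediction, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(prediction)

    report = {}
    for group, values in by_group.items():
        mean = sum(values) / len(values)
        report[group] = {
            "mean": mean,
            "flagged": abs(mean / overall - 1) > tolerance,  # relative deviation
        }
    return report
```

A check like this can run in the model-validation pipeline and block deployment until flagged disparities are reviewed.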
Elena Pizzocaro is a partner in McKinsey’s Milan office, and Priti Joseph is an associate partner in the London office.