At many financial institutions (FIs) the end-to-end model life-cycle environment—encompassing model development, validation, and monitoring—is plagued by inefficiencies, inconsistencies, lack of transparency, and poor controls that frequently slow the response to competitive challenges and regulatory requests. The duress brought on by the COVID-19 pandemic has shone an even brighter light on these challenges in the United States and Europe, and many companies have accelerated their efforts to revamp their infrastructure across the end-to-end model life cycle.
Unfortunately, massive investments typically go toward deploying more resources within the existing operating model, which creates an even bigger logjam and fails to unlock the vast efficiency gains that are possible—and necessary. To break this vicious cycle, FIs need to fundamentally rethink their approach across all model life-cycle stages.
That means taking a more integrated, strategic approach to managing the model life cycle, starting with an end-to-end restructuring of automation across activities—development, validation, and other ongoing functions (such as monitoring and periodic validation)—in a cost-efficient manner. Infrastructure, a core foundation of any such transformation, should be more robust, easy to use, and comprehensive to avoid the bottlenecks that slow responsiveness between model development and validation. We have observed that FIs at the forefront of these efforts often begin with three steps: define the target state, identify the current pain points, and outline design principles to implement a robust infrastructure.
Define the target state
Many banks wrestle with how best to tackle the daunting task of streamlining and partially automating model development workflows across their domains. To begin, we recommend reviewing all current models and prioritizing areas to automate. Prioritizing automation typically involves three key considerations:
- prioritize activities that maximize the impact of automation (for example, by including models that need frequent validation or monitoring)
- deprioritize models that require frequent changes in structure or design
- identify opportunities to scale by analyzing similarities in methodologies across models (for example, by implementing automation tools that can be applied to many models with minimal customization)
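The three considerations above can be sketched as a simple scoring heuristic. This is an illustrative sketch only, not an established methodology: the model attributes, weights, and example models are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    validations_per_year: int         # proxy for automation impact
    structural_changes_per_year: int  # frequent redesigns reduce automation value
    methodology_family: str           # shared methodologies scale better

def automation_priority(m: Model, family_counts: dict[str, int]) -> float:
    """Hypothetical score: reward frequent validation, penalize structural
    churn, and reward methodologies shared by many models."""
    impact = m.validations_per_year
    churn_penalty = 2 * m.structural_changes_per_year
    scale_bonus = family_counts.get(m.methodology_family, 1)
    return impact - churn_penalty + scale_bonus

models = [
    Model("pd_retail", 4, 0, "logistic"),
    Model("trading_var", 12, 6, "monte_carlo"),
    Model("pd_sme", 4, 1, "logistic"),
]

# Count how many models share each methodology family.
families: dict[str, int] = {}
for m in models:
    families[m.methodology_family] = families.get(m.methodology_family, 0) + 1

ranked = sorted(models, key=lambda m: automation_priority(m, families), reverse=True)
```

In this toy ranking, the stable, frequently validated retail model with a widely shared methodology outranks the trading model that is validated often but redesigned constantly.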
While prioritizing automation initiatives, the bank should also assess its model life-cycle infrastructure, specifically whether it’s too decentralized. Most companies fall into one of three infrastructure archetypes (exhibit).
The first group has a completely decentralized infrastructure fragmented across multiple model development and model validation teams. These companies apply their automation tools inconsistently across the model life-cycle environment, which hurts efficiency.
A second group of companies has reduced some of this fragmentation by creating a centralized development infrastructure and a separate centralized validation infrastructure. This is an improvement, but the two teams are still prone to miscommunication and often duplicate testing.
Finally, some companies have a fully centralized model development and model validation infrastructure. In our experience, leading institutions are moving toward this operating model. They have a central model inventory, a workflow-management tool for development and validation, and a set of approved tests across the model life-cycle stages. Deep domain expertise within specialized teams drives the development and validation of models, while a centralized infrastructure increases synergies. Based on McKinsey client use cases in the United States and Europe, this can speed up initial model validation by 30 percent (for example, by automating the documentation of test results of modeling techniques) and expedite periodic validation by 50 to 60 percent (for example, by fully automating model replication and testing). In addition, this type of infrastructure helps to optimize interactions between key stakeholders (model development, validation, and users) and improve effectiveness (for example, increasing consistency across activities and reducing the risk of errors).
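Much of the periodic-validation speedup comes from automating the write-up of test results. A minimal sketch of what such auto-documentation might look like follows; the metric names, thresholds, and report format are hypothetical.

```python
# Sketch: auto-generate a standardized validation summary from test results.
# Metric names, thresholds, and the report wording are all hypothetical.
def validation_summary(model_name: str, results: dict[str, float],
                       thresholds: dict[str, float]) -> str:
    lines = [f"Periodic validation report: {model_name}"]
    for metric, value in sorted(results.items()):
        limit = thresholds[metric]
        status = "PASS" if value >= limit else "FAIL"
        lines.append(f"  {metric}: {value:.3f} (threshold {limit:.3f}) -> {status}")
    overall = all(results[m] >= thresholds[m] for m in thresholds)
    lines.append(f"Overall: {'APPROVED' if overall else 'ESCALATE TO VALIDATOR'}")
    return "\n".join(lines)

report = validation_summary(
    "pd_retail",
    {"auc": 0.81, "ks": 0.42},
    {"auc": 0.75, "ks": 0.40},
)
print(report)
```

Because the same function runs for every model, the resulting documentation is consistent across teams, and a validator reviews a standardized summary instead of re-deriving each test by hand.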
Identify the pain points
Institutions also need to rally model risk management (MRM) teams to identify pain points across the model life cycle and make certain that the new model life-cycle infrastructure will address them. In our experience, these pain points fall into five categories.
- Inefficient activities. When documents and code are neither centralized nor easily accessible, companies are forced to continually repeat manual steps for model development, testing, and validation and will frequently redevelop similar models (for example, in the credit-risk domain).
- Inconsistent activities. In a fragmented model life-cycle landscape, coding standards, testing rigor, and the quality of documentation will vary. Moreover, these inconsistent activities tend to drive greater and greater inconsistency over time as individual MRM and development teams continue to pursue their own manual ways of doing business and embed their own legacy problems in their operating models and model structures.
- Lack of transparency. The fragmented environment and the lack of a control tower to centrally collect and measure key performance indicators (KPIs) and key risk indicators (KRIs) make it next to impossible to get a complete picture of the model landscape across the organization and its evolution over time.
- Lack of controls. With a decentralized infrastructure, it’s often very hard to maintain version control and reproducibility with exact code, data sets, and quality assurance to conduct and support development and testing.
- No audit trail. Creating an audit trail is difficult if automation tools are continually tailored to specific models and when innovation work on one model cannot be scaled across related models.
Outline the design principles
Once the company has defined a target state that can address these pain points, it should begin to outline the design principles of the new infrastructure: efficiency, flexibility, consistency, transparency, and controls.
To embed greater efficiency, companies should automate and centralize model testing and documentation, such as text and test results. They could take the opportunity to embed the FI’s standards into the infrastructure, such as the MRM team’s requirements for model performance testing, documentation style, and commentary. We expect that AI will play an increasing role in driving efficiency, although such techniques must be implemented carefully to avoid perpetuating modeling errors or bias.
To improve flexibility, the bank’s workflow manager should use a standardized coding language and a common code and data repository so the automated workflows can be easily reproduced and audited. FIs could also make use of metadata and labels to more easily navigate the common functions, definitions, code, and documentation found throughout the infrastructure. By setting up dedicated automated pipelines, the bank could begin to introduce code, data, and documentation at any point of the model life cycle. By putting telemetry agents in place, sensors could provide insights on model performance and facilitate bidirectional interaction. So, for example, a user could receive standardized information on model performance from the sensor while in parallel performing ad hoc analyses or injecting code.
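A telemetry agent of this kind can be pictured as a thin wrapper around a model that records agreed-upon metrics on every call and exposes them on demand. The sketch below is a hypothetical illustration, not a reference implementation; the `Sensor` class and its metric names are assumptions.

```python
# Sketch of a lightweight "telemetry agent": it wraps a scoring model,
# passively records each prediction, and exposes standardized metrics
# that a central dashboard could collect. Class and metric names are
# hypothetical.
import math
import statistics
from typing import Callable

class Sensor:
    def __init__(self, model_fn: Callable[[float], float]):
        self._model_fn = model_fn
        self.scores: list[float] = []

    def predict(self, x: float) -> float:
        y = self._model_fn(x)
        self.scores.append(y)  # passive telemetry on every call
        return y

    def metrics(self) -> dict[str, float]:
        # Standardized metrics pushed to (or pulled by) a central dashboard.
        return {"calls": float(len(self.scores)),
                "mean_score": statistics.mean(self.scores)}

sensor = Sensor(lambda x: 1 / (1 + math.exp(-x)))  # toy logistic scoring model
for x in (-1.0, 0.0, 1.0):
    sensor.predict(x)
m = sensor.metrics()
```

The bidirectional interaction described above corresponds to the user both reading `metrics()` and calling `predict()` directly for ad hoc analysis, without touching the underlying model code.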
To bring greater consistency across the model life-cycle environment, the FI should centralize testing and enforce the use of standardized documents for all models. This consistency is buttressed by the telemetry agents, which can collect and analyze agreed-upon metrics. By displaying these KPIs and KRIs on a dashboard for all credentialed users, the MRM becomes transparent: stakeholders have full visibility and can stay up to date on model development and validation.
Finally, the design for the model life-cycle infrastructure should include controls and auditing capabilities. For example, it’s important to enforce “rules versioning” throughout the model life-cycle process. And, as noted above, it’s easier to produce an audit trail when the company’s automated workflow manager uses a standardized coding language and a common code and data repository.
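Rules versioning can be made concrete by hashing each rule set and appending it to an immutable log, so that every validation run can name the exact rules it applied. The following is an illustrative sketch only; the rule names and log structure are hypothetical.

```python
# Sketch of "rules versioning": every change to a validation rule set is
# content-hashed and appended to a log, yielding a reproducible audit trail.
# Rule names and the log structure are hypothetical.
import hashlib
import json

audit_log: list[dict] = []

def commit_rules(rules: dict, author: str) -> str:
    # Canonical serialization so the same rules always hash identically.
    payload = json.dumps(rules, sort_keys=True)
    version = hashlib.sha256(payload.encode()).hexdigest()[:12]
    audit_log.append({"version": version, "author": author, "rules": rules})
    return version

v1 = commit_rules({"min_auc": 0.75, "max_psi": 0.10}, author="mrm_team")
v2 = commit_rules({"min_auc": 0.78, "max_psi": 0.10}, author="mrm_team")
```

Because the version is derived from the rule content itself, any later validation report that records the version string can be traced back to the precise rules in force at the time.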
A lightweight technology footprint is necessary to ensure that the implementation of such a framework can be carried out quickly (ideally, a matter of weeks) without a long-winded and risky overhaul of the system environment. By lightweight we mean a nonobtrusive, open-source, low-cost infrastructure that requires minimal maintenance, and which is easily adaptable to any system and architecture.
At this point, the FI should also assess its internal capabilities and whether the talent exists in-house to build and operate the target-state infrastructure. If not, it needs a plan to acquire those capabilities. Most FIs will find that they need a range of new capabilities, including people with specific technology skills and knowledge of automation (for example, robotic process automation). Moreover, senior leaders need to keep the MRM team involved in these plans. Successful adoption hinges on broad support and enough comfort with the new infrastructure to use these tools every day in the natural course of business for model development, validation, and MRM.
Given the complexities of the global marketplace, it is critical that FIs improve the management of their model life cycle to improve efficiencies and controls. While COVID-19 has accelerated the timeline for this revamp and created heightened urgency, success will depend on coordinating efforts broadly across the bank. By taking a more integrated, strategic approach to the management of the model life cycle, banks can unlock massive model development and validation potential.