FRTB reloaded: Overhauling the trading-risk infrastructure

The Fundamental Review of the Trading Book (FRTB) introduces many new elements to Basel’s market-risk framework.1 Some of the most important include new methodologies and approaches—such as expected shortfall, a revised standardized approach to calculating capital requirements, and nonmodelable risk factors (NMRF)—as well as new processes and forms of governance (for example, the P&L attribution test and desk-level approvals). Banks are expending enormous effort to add these capabilities.
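For reference, expected shortfall replaces value at risk as the core capital measure under the internal-model approach. A standard definition follows; FRTB prescribes a 97.5 percent confidence level on stress-calibrated data, and the liquidity-horizon scaling it also requires is omitted here:

$$
\mathrm{ES}_{\alpha}(L) \;=\; \mathbb{E}\left[\,L \;\middle|\; L \geq \mathrm{VaR}_{\alpha}(L)\,\right], \qquad \alpha = 97.5\%,
$$

where $L$ is the portfolio loss over the base horizon and $\mathrm{VaR}_{\alpha}(L)$ is its $\alpha$-quantile.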

Less noticed are the implicit demands these changes make on the trading-risk infrastructure—the data and systems that support the enhanced methodologies and processes introduced by FRTB. Indeed, it might seem that FRTB asks banks only for some light housekeeping; the Basel paper barely mentions infrastructure per se. But the implications are actually enormous: at larger banks, what’s needed is nothing less than a fundamental overhaul. At smaller banks, the stakes are not as high, but these institutions also have work to do.

Throughout the industry, the trading-risk infrastructure is showing signs of strain in the face of FRTB compliance. In large measure, that’s because banks have underinvested in this area since the introduction of Basel 2.5 and haven’t always tackled the work strategically. Indeed, in a 2017 McKinsey survey on priorities for traded risk, banks put data quality and enhancements to the technology platform at the top of the list. One of the bigger issues many banks seek to fix is their parallel yet misaligned risk and finance architectures (including different pricing or valuation models, market-data sources, and risk-factor granularity), which lead to contradictory and confusing results.

Recent quantitative-impact studies (QIS) by the Basel Committee and many banks’ own analyses of the new P&L attribution test show that more than 70 percent of a bank’s desks fail the test; that is, banks cannot adequately explain the P&L and its drivers. Or consider the large number of manual overrides needed to get the trade population right, the onerous chore of risk-factor mapping, stale market data, missing reference data, and pricing-model breaks resulting from non-stress calibration: all are infrastructure challenges. Even before FRTB takes full effect, these and other challenges have led to poor backtesting results and further supervisory “add-on” capital charges—for example, value-at-risk (VaR) multipliers greater than five—as outlined in a 2013 study by the Basel Committee.2
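
To make the test concrete, below is a minimal sketch of the two ratio tests set out in the January 2016 FRTB text, in which unexplained P&L is the difference between the hypothetical P&L from front-office systems and the risk-theoretical P&L from the risk model. The thresholds (10 percent on the mean ratio, 20 percent on the variance ratio) are those of the 2016 text and may change in the final framework; the data are invented.

```python
import numpy as np

def pla_test(hypothetical_pnl, risk_theoretical_pnl,
             mean_limit=0.10, var_limit=0.20):
    """P&L attribution test per the January 2016 FRTB text (illustrative).

    A desk passes if:
      * |mean(unexplained P&L)| / std(hypothetical P&L) <= mean_limit, and
      * var(unexplained P&L) / var(hypothetical P&L) <= var_limit.
    """
    hpl = np.asarray(hypothetical_pnl, dtype=float)
    rtpl = np.asarray(risk_theoretical_pnl, dtype=float)
    unexplained = hpl - rtpl

    mean_ratio = abs(unexplained.mean()) / hpl.std(ddof=1)
    var_ratio = unexplained.var(ddof=1) / hpl.var(ddof=1)
    return {
        "mean_ratio": mean_ratio,
        "var_ratio": var_ratio,
        "passes": mean_ratio <= mean_limit and var_ratio <= var_limit,
    }

# Hypothetical daily P&L series for one desk (250 trading days)
rng = np.random.default_rng(0)
hpl = rng.normal(0.0, 1.0, 250)
rtpl = hpl + rng.normal(0.0, 0.3, 250)  # imperfect risk-model replication
print(pla_test(hpl, rtpl))
```

A desk whose risk model replicates the front-office P&L poorly will show a high variance ratio, which is exactly the failure mode behind the QIS results cited above.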

The banks that read between the lines of the original FRTB requirements and started to fix their infrastructure have a strategic advantage now. But the confirmed delay of FRTB implementation to January 1, 2022, has thrown other banks a lifeline (Exhibit 1).3 In our view, there is just enough time before the deadline to tackle the deeper challenges. Rather than coasting to the finish line, banks should focus on implementing FRTB in a smart way, including the broader strategic goal of upgrading the trading-risk infrastructure from front to back.

Exhibit 1: The delay of FRTB implementation to January 1, 2022, has thrown many banks a lifeline.

Banks that choose this path will capture benefits in capital efficiency, cost savings, and operational simplification. We believe that these benefits can mitigate the full extent of the reduction in banks’ ROE resulting from FRTB and other regulations—a reduction we estimate at three percentage points. In this paper, we will examine the business case for an infrastructure overhaul, including the core sources of efficiency and savings; the design principles of a best-in-class infrastructure; and the steps banks can take to implement these ideas.

Banks have been given a golden opportunity to get their trading houses in order and to set the stage for all the advanced technologies (robotic process automation, smart work flows, machine learning, and so on) that are so thoroughly remaking the industry.4

The case for investing in infrastructure

Compliance with FRTB is not the only reason to overhaul infrastructure, but it is a powerful one. A coherent front-to-back technical architecture and aligned organizational setup eliminate many sources of discrepancy among the business, risk, and finance views. With that, the chances of supervisory approval increase.

Take one example: the better alignment between front office and risk required under FRTB is impossible unless both share an efficient, consistent firmwide data infrastructure. Without it, banks cannot remediate discrepancies between risk’s P&L and the front office’s—for instance, the differences that arise in sensitivities, backtesting, and P&L attribution.

Just as important, an overhaul of the trading-risk infrastructure makes eminent sense from a business perspective. Key risk metrics, such as sensitivities, value at risk or expected shortfall, and risk-weighted assets (RWA), are not just technical or regulatory concepts but also the foundation of senior managers’ decision making. To produce reliable, fast, high-quality measurements (as specified in BCBS239), an institution needs reliable, high-quality data processed by the cogs of an efficient operating model. Only then can the bank truly know its complete risk profile and profitability, and execute its strategy with assurance.

Underlying both arguments—compliance and business—are the considerable benefits of consistency and efficiency.

Consistency through unique taxonomies

Consistency is paramount to establish trust and confidence in the metrics. Unique data taxonomies (or dictionaries or libraries) and a clear data model enable provenance and transparent lineage across the whole front-to-back trading-risk data flow. “Golden sources”—single data sources for a certain data type, used as a reference in all downstream calculations across the bank—inspire confidence and provide accountability by ensuring that only one version of the truth exists for each data type in the bank. (Note that multiple databases can constitute such a golden source if they use the same data taxonomies and structure.) For example, using one source of market data for both risk and P&L calculations directly improves backtesting and P&L attribution results; it can also align measurements, eliminate the operational risk in data reconciliation, and increase the quality and completeness of data.

Further, there must be a clear ownership and subscription model for specific data types, as well as adequate enforcement around it. Typically, ownership lies upstream, where the data are created or sourced, and downstream systems and users subscribe to the upstream golden sources.
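
As a hypothetical illustration of such an ownership-and-subscription model, the sketch below registers exactly one golden source per data type and lets downstream systems subscribe rather than keep copies; all system and team names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class GoldenSource:
    """Single authoritative source for one data type (illustrative)."""
    data_type: str          # e.g. "market_data", "reference_data"
    owner: str              # upstream team where the data are created or sourced
    system: str             # physical system of record
    subscribers: set = field(default_factory=set)

class DataRegistry:
    """Enforces one golden source per data type across the bank."""
    def __init__(self):
        self._sources = {}

    def register(self, source: GoldenSource):
        if source.data_type in self._sources:
            raise ValueError(f"golden source for {source.data_type!r} already exists")
        self._sources[source.data_type] = source

    def subscribe(self, data_type: str, downstream_system: str):
        # Downstream users subscribe to the golden source; they never own copies.
        self._sources[data_type].subscribers.add(downstream_system)

registry = DataRegistry()
registry.register(GoldenSource("market_data", owner="front-office data team",
                               system="md-golden-store"))
registry.subscribe("market_data", "risk-var-engine")
registry.subscribe("market_data", "finance-pnl-engine")
```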

The knock-on effects of unique taxonomies and golden sources extend to the broader organization. By standardizing risk factors and sensitivities throughout a firm, say, or by making universal use of the same pricing-model libraries, banks can move with greater confidence as they design new products or tie together different databases in search of new insights.

We see several examples of banks setting out to establish golden sources for market data and reference data, as well as a single pricing-model library, with significant cost savings, and capital savings beyond that (Exhibit 2).

Exhibit 2: Banks can design a streamlined infrastructure with golden sources.

Efficiency: Standardization, automation, and outsourcing

Efficiencies are always welcome, but especially now in view of the significantly higher computational and storage needs of FRTB (such as a tenfold increase in the number of P&L vector calculations and the demands of desk-level reporting). Further, the benefits of consistency—the “goldenness” of the sources—are quickly lost if the infrastructure is not operating efficiently. This creates a powerful bias for standardization and automation wherever possible. For example, banks need to standardize their risk-factor and reference-data taxonomies so that they can easily use their golden sources without time-consuming mapping exercises. Standardization may also mean that banks need fewer vendor licenses and less maintenance and can free up staff and computational capacity. Automated data cleaning (potentially using advanced-analytics and machine-learning methods) and automated report production are further key drivers of efficiency, as they address some of the most resource-intensive activities.
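
As one hedged example of automated cleaning, the sketch below flags stale quotes and outliers in a daily price series using simple rolling statistics; in practice banks may use richer machine-learning methods, and the window and thresholds here are illustrative.

```python
import pandas as pd

def flag_suspect_quotes(prices: pd.Series, window: int = 20,
                        z_threshold: float = 4.0, stale_run: int = 5):
    """Flag stale and outlier observations in a daily price series (illustrative)."""
    returns = prices.pct_change()
    # Outliers: a return more than z_threshold rolling standard deviations from zero.
    rolling_std = returns.rolling(window).std()
    outlier = returns.abs() > z_threshold * rolling_std
    # Staleness: the same price repeated for stale_run or more consecutive days.
    run_length = prices.groupby((prices != prices.shift()).cumsum()).transform("size")
    stale = (run_length >= stale_run) & (prices == prices.shift())
    return pd.DataFrame({"outlier": outlier, "stale": stale})
```

Flagged observations would then be routed to repair logic or to a data steward rather than silently feeding the day's VaR run.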

Organizational efficiencies are available, too. For example, processes such as VaR and P&L production and reporting, as well as the development and validation of models, can be moved to shared service centers and centers of excellence.

Efficiency also comes from acknowledging that not everything can be done in-house. Outsourcing relevant business-as-usual processes and using products from vendors add value and help a bank to concentrate on building capabilities from within. Such processes include data sourcing and the cleaning of market and reference data; transaction-data pooling for NMRF; pricing and risk modeling; and the development, production, reporting, and validation of models. Efficiency is not a positive side effect but a design choice.

Sizing the opportunity

In a competitive and uncertain environment, capital efficiency and cost savings become significant drivers for boosting ROE. Both are powered by a revamped trading-risk infrastructure.

And both may be necessary to counter a likely decline in ROE due to FRTB. On average, the global industry’s ROE remained in the single digits in the last few years (8.6 percent in 2016); so did the ROE of the top ten global capital-markets players, at 9.7 percent. For the next few years, regulatory-capital constraints, many embodied in FRTB, are likely to keep pressure on profitability. The top ten capital-markets banks’ average ROE might fall by about 34 percent by 2022, mainly as a result of higher capital requirements (Exhibit 3). We estimate that, on average, the top ten global capital-markets banks will each have to reserve an additional $9 billion in capital, of which $4.5 billion results directly from FRTB.5 Diminished profits lead to strategic complications, not least a limit on the ability of banks to finance future growth. And revenue growth is slowing in many parts of the world.
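
The arithmetic behind such estimates is simple: with profits held flat, additional required capital compresses ROE proportionally. A stylized formula, using a hypothetical equity base purely for illustration:

$$
\mathrm{ROE}_{\text{new}} \;=\; \frac{\Pi}{E + \Delta E} \;=\; \mathrm{ROE}_{\text{old}} \cdot \frac{E}{E + \Delta E}.
$$

For a hypothetical bank with $E = \$50$ billion of equity, the estimated $\Delta E = \$9$ billion add-on alone would cut ROE by $\Delta E/(E+\Delta E) \approx 15$ percent before any revenue or cost effects; the full estimated decline reflects the complete set of capital and profitability pressures.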

Exhibit 3: FRTB and other new rules will dent returns unless banks act.

Capital efficiency. McKinsey’s capital-management survey highlights that banks, especially in Europe, have significant scope to improve the management of their balance sheets.6 Banks can use three sets of technical levers that, combined, could reduce RWAs by 10 to 15 percent:

  • Improve data quality and infrastructure. Effective data management can reduce capital charges, even in the standardized approach (STA). For example, banks can develop a comprehensive, relevant, and cross-cutting data model that considers issues such as product classification and segmentation and how to allocate positions to the relevant models, approaches, and risk-weight categories. They can identify gaps in the data and mitigate them by, say, checking the availability of historical market-data time series and sourcing all relevant external ratings. In fact, tapping the full range of external data sources (such as emerging trade repositories and industry utilities) is desirable to ensure comprehensive data sets. Finally, banks can enhance and validate their data through backfilling and thoughtful proxies for hard-to-find data.
  • Enhance processes. Many processes that figure in the calculation of capital requirements—such as hedging, netting, and collateral management—can be enhanced by, for example, ensuring full coverage and the timeliness and rigor of the process, as well as by allowing only limited deviations. Further, the data processes involved can be standardized and automated. Like the cost efficiencies mentioned previously, this approach can help capture capital efficiencies.
  • Carefully choose and parameterize models and methodologies. One core lever for capital efficiency (and accuracy in capturing the risk profile) is opting for the internal-model approach (IMA)—in particular, for products that are heavy RWA consumers. Indeed, the standardized approach often leads to more conservative capital charges and is more prescriptive, offering banks less flexibility to optimize further. Recent QIS and banks’ internal analyses of FRTB’s impact show that the IMA leads to a 1.5-fold increase in market-risk RWAs, versus 2.5-fold under the STA. While impressive, this capital-efficiency gain must be weighed against the operational complexity and cost of implementing and maintaining the IMA, as well as the potential volatility in capital caused by switching from IMA to STA when certain desks fail P&L attribution tests (see the sketch after this list). Smaller banks, in particular, might make IMA decisions different from those of larger banks. And larger banks should carefully consider which portfolios or desks to put forward for initial IMA approval; these should be clear winners.
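
As a hypothetical illustration of that IMA-versus-STA trade-off, the sketch below screens desks by comparing the capital saved under the IMA and STA multipliers cited above against the running cost of maintaining IMA; every number, desk name, and ratio is invented for illustration.

```python
# Illustrative IMA-versus-STA screen per trading desk; all figures are invented.
IMA_MULTIPLIER = 1.5    # market-risk RWA uplift under IMA (per recent QIS)
STA_MULTIPLIER = 2.5    # market-risk RWA uplift under STA
CAPITAL_RATIO = 0.12    # hypothetical capital held against RWA
COST_OF_CAPITAL = 0.10  # hypothetical annual cost of capital

desks = {  # desk -> (current market-risk RWA in $m, annual IMA run cost in $m)
    "rates-flow":    (8_000, 15),
    "fx-exotics":    (1_200, 20),
    "credit-single": (3_500, 10),
}

for desk, (rwa, ima_cost) in desks.items():
    capital_saved = (STA_MULTIPLIER - IMA_MULTIPLIER) * rwa * CAPITAL_RATIO
    annual_benefit = capital_saved * COST_OF_CAPITAL - ima_cost
    verdict = "pursue IMA" if annual_benefit > 0 else "stay on STA"
    print(f"{desk}: saves ${capital_saved:,.0f}m capital, "
          f"net benefit ${annual_benefit:,.1f}m/yr -> {verdict}")
```

On these invented numbers, the large rates desk clearly justifies IMA while the small exotics desk does not, which is the kind of desk-by-desk triage the text describes.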

Banks must build and enhance the models needed for FRTB, such as expected shortfall, default risk charge, and NMRFs. As they do, they should carefully consider the model type (for instance, the choice of full revaluation or the sensitivities-based approach), as well as the model’s underlying parameters, such as risk-factor coverage and assumptions about correlation and liquidity.

Risk factors are an area of special concern. FRTB introduces a steep capital charge for holding illiquid, NMRF-linked products, such as exotic currency pairs and small-cap single credit names. Modellability is defined by frequency of observation: a risk factor is modellable only if it has at least 24 real-price observations a year, with no more than a 30-day gap between consecutive observations; risk factors that fail this test are NMRFs. NMRFs alone will boost market-risk capital by 35 percent, suggesting that there is material value for banks in demonstrating the observability of risk factors. Besides sourcing market data from vendors, exchanges, and trade repositories, banks can meet the observability criterion by pooling transaction data among themselves—for example, through an industry utility.
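
The observability test lends itself to a simple check. A minimal sketch, assuming a list of dated real-price observations for one risk factor; the 24-observation and 30-day-gap thresholds follow the criterion described above, while the date handling is illustrative:

```python
from datetime import date, timedelta

def is_modellable(observation_dates, as_of: date,
                  min_obs: int = 24, max_gap_days: int = 30) -> bool:
    """Return True if a risk factor passes the FRTB observability test.

    Requires at least `min_obs` real-price observations in the past year,
    with no gap longer than `max_gap_days` between consecutive observations.
    """
    window_start = as_of - timedelta(days=365)
    obs = sorted(d for d in observation_dates if window_start <= d <= as_of)
    if len(obs) < min_obs:
        return False
    gaps = [(b - a).days for a, b in zip(obs, obs[1:])]
    return max(gaps, default=0) <= max_gap_days

# Hypothetical observations: roughly twice a month passes; quarterly does not.
twice_monthly = [date(2021, 1, 5) + timedelta(days=15 * i) for i in range(25)]
print(is_modellable(twice_monthly, as_of=date(2022, 1, 15)))  # True
quarterly = [date(2021, 1, 5) + timedelta(days=90 * i) for i in range(5)]
print(is_modellable(quarterly, as_of=date(2022, 1, 15)))      # False
```

Pooled transaction data would simply enlarge `observation_dates`, which is why an industry utility can flip a factor from nonmodellable to modellable.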

Cost savings. Reaching double-digit ROEs also depends on the cost savings delivered by a modern infrastructure. Typically, these range between 15 and 20 percent of the current infrastructure cost base, or $250 million to $350 million for an average top ten global capital-markets bank. (Such efficiencies are additions to the significant cost savings already achieved in the past few years.) Moreover, these cost-saving moves have significant synergies with the process optimization and standardization described earlier.

Cost savings can be achieved in three main ways. The first is the systems infrastructure, which often has duplicative elements, and the data. Banks can consolidate data warehouses into unique golden sources, remove duplicative applications, and consolidate front-office risk-calculation “engines” (repurposing the hardware and people supporting them). We have seen banks consolidate a fragmented landscape of about 40 front-office risk engines into fewer than five, with an immense impact on savings.

Second, standardization and automation, with their strong contributions to efficiency, also play a role in cost savings. So does a better prioritization of activities, such as a hierarchy of needed reports. Banks can also streamline their outputs: eliminating “nice to have” information makes reports simpler, and consolidating risk reports to different recipients into one saves time and effort. Automation reduces manual work and improves effectiveness by significantly reducing the number of errors.

Third, banks can mutualize their costs. New platforms and industry utilities provide shared data—most prominently, market data and reference data—and reduce the cost of the common activities that all banks need to undertake but that don’t offer a competitive advantage to any.

A large European bank, which was particularly troubled by problems with duplicative applications and confusion among its data sources, recently put most of these capital-efficiency and cost-saving moves in play. It defined five initiatives. On the technology front, the bank reduced the number of applications and transferred production of some services to a shared service. On data, it worked to build golden sources. In risk and finance, it aligned governance and did technical work to bring finance’s P&L and risk’s exposure reports into alignment. It simplified its processes. Finally, the bank used demand management to lower the cost of new development (for instance, by asking users to prioritize new functionalities in risk applications) and the costs involved in the daily run of systems (reducing daily breaks, for example, and the associated cost of support and maintenance). Costs fell by more than 10 percent; regulatory delivery became faster; and the accuracy of information improved.

Building the new infrastructure

Taking these steps is of course challenging—and made harder by scarce implementation budgets and other resources at banks that are struggling to generate profits. Nonetheless, having seen several banks successfully develop and execute programs to revamp the infrastructure, we have identified five actions critical to their success.

Prioritize well

At a large bank, implementation, including significant parts of these infrastructure changes, will probably cost $100 million to $200 million. At the same time, banks will quickly start saving on capital and operational costs. Carefully weighing these benefits and expenses for each asset class, geography, and group of trading desks is a core lever to manage the scope, complexity, and cost of implementation.

Establish senior oversight

Leading banks have put in place a governance committee specifically for the front-to-back capital-markets infrastructure. This committee executes its core oversight responsibility by designing the strategic infrastructure, outlining and monitoring the transformation road map, overseeing progress made across infrastructure-transformation projects, and resolving any issues that might arise from conflicting requirements. Typically, such a committee includes the chief operating officers for capital markets, market/traded credit risk, and finance; senior managers of risk-data aggregation and risk reporting; and others as needed.

Exploit synergies with ongoing programs

Business and regulatory programs already under way might have different goals but often touch upon the same infrastructure. An example could be the program to develop Global Market Shock (GMS) loss forecasts, as required under the Comprehensive Capital Analysis and Review (CCAR). Other regulatory programs include the targeted review of internal models (TRIM), the European Banking Authority (EBA) Stress Test, the Markets in Financial Instruments Directive 2 (MiFID 2) for European banks, the guidelines for interest-rate risk in the banking book (IRRBB), the standardized approach for measuring counterparty credit-risk exposures (SA-CCR), and IFRS 9.

Banks usually try to manage these overlaps by putting in place alignment and feedback loops or by staffing programs with the same colleagues. In large organizations, this gets exceedingly difficult, particularly when programs are commissioned by different departments or located in different geographies. Banks should be on the lookout for synergies between FRTB and other ongoing regulatory programs and exploit them in moving toward a more centralized infrastructure (including golden data sources, APIs to key calculation engines, and so on). In our experience, a productive approach toward a more centralized platform for traded risk starts with programs where significant overlap can be expected, such as FRTB and CCAR GMS (Exhibit 4). By closely connecting the infrastructures built to comply with big regulatory programs, banks can derive significant efficiency benefits.

Exhibit 4: Banks can exploit synergies between FRTB and CCAR GMS.

Reconsider build or buy options

In response to FRTB, platform and data vendors have begun to offer infrastructure solutions, as well as components such as front-office risk engines, aggregation and reporting systems, and data-management platforms. With a broad range of solutions now commercially available, banks are in a comfortable position to investigate their buy-or-build trade-offs. They can then focus their implementation efforts on areas where in-house solutions are required to ensure flexibility or other desired characteristics. Many banks still think that certain parts of the infrastructure give them a competitive advantage. But as risk IT gets increasingly standardized, this argument makes less sense, and the option to buy becomes more attractive.

Secure talent

Given the extensive regulatory book of work at many banks, people with relevant capabilities are in high demand: everyone is looking for skilled analytics experts, data engineers, and IT developers, and for knowledgeable program managers. One solution is to rotate such people frequently across the bank. Another is to provide an inspiring atmosphere to attract and retain that talent. But there are more innovative approaches to talent management: collaboration with fintechs and other vendors may be one; another could be collaboration within the bank (for instance, by building joint advanced-analytics or data-analytics centers of competence). Banks should scout things out—for example, by joining communities where digital talent resides, such as conferences and online developer forums. In this way, banks put themselves right in front of the talent pool and can attract people to compelling jobs in banking-risk technology.


Time has a way of sneaking up on us. As one risk leader said recently, “FRTB forces us to do the housekeeping that we should have done years ago.” Every bank should take the message to heart and not wait until the next deadline rolls around.

Download FRTB reloaded: The need for a fundamental revamp of trading-risk infrastructure, the full report on which this article is based (PDF, 4MB).
