Research and reports

Insights, Research and reports

Interactive survey: Broker scorecarding in 2019

Comparing your brokers and their execution strategies can be incredibly insightful – and of immeasurable value to your business.

We reached out to our buy-side clients to learn more about the broker review landscape in 2019. Over two months we carried out face-to-face interviews with professionals from 22 buy-side firms, with a particular focus on why firms carry out broker reviews, current processes, challenges faced and plans for enhancement.

Our latest report highlights the key results and trends from our surveys – to give you a better gauge of where you fit into the market, and what else you can be doing to benefit your business that you may not have already considered.

What you can learn:

  • Understand the drivers behind broker reviews.
  • Learn how often other firms carry out reviews.
  • Understand how – and with whom – firms perform their reviews.
  • Gain insight into what goes into reviews and how they work.
  • Find out what the future holds for reviews: how many firms plan on enhancing and how.

Download our interactive guide.


Asset Managers could save millions through clearing ahead of UMR

Our latest research shows asset managers pulled into phases IV and V of the Uncleared Margin Rules (UMR) will be able to save up to 53% in initial margin when clearing compared to uncleared margining. 

UMR, which, among other requirements, mandates margining rules for trades that are not cleared by a central counterparty (CCP), has caused significant collateral inefficiencies for many in-scope firms. For asset managers with portfolios above €750bn in notional, clearing a greater volume of OTC trades frees up potentially millions of dollars' worth of assets to put to use elsewhere.




The findings come as the industry continues to prepare itself to post an eye-watering $2 trillion more margin as a result of the rules, the final phase of which has been pushed out until 2021. This means that thousands of asset managers that have never previously had to post margin will have much-needed additional time to get their operational houses in order.

In response to the research, our CEO Peter Rippon said: “The overarching goal of UMR is to strongly incentivise asset managers to stop trading bilateral uncleared derivatives, and shift towards central clearing.”


“Unfortunately, it’s not as simple as just deciding to clear; firms then need to decide where to clear. A derivative may be eligible to clear at numerous venues, but an asset manager then needs to factor in liquidity and whether they have an existing position, not to mention any pricing discrepancies between the clearing houses.”


He concluded: “The trouble is, at a time when investors are putting fund performance under the spotlight following Neil Woodford’s woes, the last thing asset managers need is to be restricted from delivering strong returns. This problem can be solved, which is why we are seeing more firms carefully considering the differences in the margin calculated, and level of margin that will be required for UMR.” 

Interested in how this saving can be achieved in greater depth? Take a look at our blog on how leveraging a variety of optimisation techniques can save you up to 80% on the cost of margin.


2019 UMR Impact Analysis Research Report

This research report summarises the findings from OpenGamma’s ‘Uncleared Margin Rules (UMR) impact analysis’ that we performed on behalf of 15 clients over the last 6 months.

The results are anonymised and aggregated by firm type, but help to quantify the margin impact of SIMM on the industry.

The report also outlines the steps firms can take to minimise the impact of SIMM, such as voluntary clearing and other optimisation approaches.

Download the full report to read:

  • Full UMR impact analysis for 15 firms, including 6 hedge funds, 6 asset managers and 3 pension funds.
  • How SIMM and Cleared margin requirements impact funding costs.
  • How to optimise margin under UMR, including:
    • The regulatory schedule approach
    • ‘Cherry-picked’ backloading
    • Synthetic risk moves
    • Remaining below the $50m threshold.
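The threshold point in the last bullet is worth making concrete. Under UMR, IM only has to be exchanged with a given counterparty once the bilateral IM amount exceeds a $50m threshold, so a firm that keeps each relationship below it posts nothing at all. A minimal sketch of the mechanic (hypothetical class and figures, not OpenGamma's implementation, and ignoring details such as minimum transfer amounts):

```java
public class ThresholdCheck {
    // UMR threshold: IM is only exchanged above $50m per counterparty pair.
    static final double THRESHOLD_USD = 50_000_000;

    /** Margin actually exchanged after applying the threshold. */
    static double marginToPost(double grossIm) {
        return Math.max(0, grossIm - THRESHOLD_USD);
    }

    public static void main(String[] args) {
        // Hypothetical counterparty with $62m of gross IM: only $12m is posted.
        System.out.println(marginToPost(62_000_000));
        // Below the threshold, nothing is exchanged at all.
        System.out.println(marginToPost(30_000_000));
    }
}
```

This is why 'cherry-picked' backloading and synthetic risk moves can pay off: shifting even a modest amount of risk per counterparty can keep the gross IM under the line.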

We hope you find the research useful. If you have any questions or want to find out more, contact us.


Poor broker performance reviews leaving fund managers in the dark

Originally published at CityAM


Fund managers may not know how much money they are spending with brokers thanks to a failure to properly review the relationships, a new study has found.

Despite the majority having procedures to review broker relationships in place, only 11 per cent of fund managers actually assess how all their brokers are performing, according to a report by analytics fintech OpenGamma.

The vast majority of the 22 investment management firms surveyed (86 per cent) had a broker review process in place, but there was no industry consensus around how often the reviews should be carried out.

Almost a third (31.8 per cent) of firms said they held reviews at “mixed/multiple” points during the year, while just over a fifth (22.8 per cent) held them “semi-annually”.

The majority of firms are reliant on manual processes to review broker relationships, with only 21 per cent using live data in the review process.

The biggest challenge when conducting reviews was gathering data and calculating revenue, the report found. Over half – 53 per cent – of firms said they found carrying out broker reviews “very time consuming”.

“Having a process for assessing how brokers are performing is without question very valuable, but only when carried out,” said OpenGamma chief operating officer Maxime Jeanniard du Dot.

“While regulations will be a big driver in reviewing broker performance, fund managers also have a strict fiduciary responsibility to investors,” he continued.

“On top of this, as the geopolitical landscape begins to take shape over the coming months, it is clear that fund managers will need to gain a new level of insight to understand the best brokers to do business with.”


Research: The impact of market volatility on your initial margin explained

Volatile markets can impact your initial margin (IM) dramatically: our new research shows your requirements could jump by up to 94%.

Using the OpenGamma platform and a unique data set, we have carried out stress-testing research to determine the amount of IM you could be forced to post during periods of high volatility. We outline the results in this new guide, along with ways to minimise the amount you need to post to protect your business from risk.

Why download this guide:

  • Understand how margin requirements can be impacted by market events
  • Prepare your firm for these jumps
  • See how to correctly stress test your portfolios
  • Determine ways to minimise margin at times of stress.


By the way, you can see how the OpenGamma platform works in real time by viewing our live demo.



CSA Changes: Do You Really Understand the Impact?

The enforcement of Daily Variation Margin (VM) exchange on uncleared derivatives for the majority of counterparties comes into effect within five months. With the March 2017 deadline fast approaching in the US, Canada, and Japan, and many market participants’ current CSAs non-compliant with the March rules, the industry has embarked on a wide-scale CSA renegotiation process. The importance of this cannot be overstated.

Negotiating each agreement can be lengthy and, with timing a real and harsh constraint, dealers are keen to have clients sign new standardised CSAs so trading can carry on as usual. But making the right decision is no easy task for the buy-side; it is vital they do so as it can impact the value of portfolios by millions of pounds sterling.

There are three options:

  1. Sign the new standardised CSA on offer, which may have reduced optionality (compared to their current agreements)
  2. Amend their current CSA
  3. Take the “replicate-and-amend” approach (where existing CSAs are replicated and amended to comply with the rules)

So which approach is best?

The first step in the process is to compare the value of portfolios with the economic impact that amending collateral terms in a CSA will have. You may be shocked.

In our analysis, we’ve assumed that a typical book is made up of the following types of trades:

  • Vanilla Interest Rate Swaps
  • Zero-coupon Swaps
  • Inflation Swaps
  • Cross-currency Swaps
  • Swaptions

We valued them under various CSA terms (detailed later), using trades across the three main currencies: GBP, USD, EUR. So what were the key takeaways?

  • The impact of changing from a multi-currency to a single-currency cash CSA on a single vanilla swap deal with 30 years remaining maturity was equivalent to 10% of the notional value.
  • Incorrectly valuing cross-currency swaps by not factoring in the cross-currency basis under a single or multi-currency cash CSAs can lead to a 35bps spread on a single deal.

Now let’s get into the meat of the problem.

To quantify the impact that CSA terms have on a typical portfolio, we have made some additional assumptions: that exposures are directional, with trades having significant notionals.

We quantified the impact of collateral eligibility on PV, with two common CSA types:

  • A single-currency (USD in our example) or siloed CSA (CSA1), where each trade exposure is collateralised in cash in the underlying currency of the trade. This results in each trade being valued using OIS discounting in the currency of the trade. For GBP/USD cross-currency swaps, each trade exposure is collateralised in USD cash.
  • A multi-currency CSA (CSA2), where GBP, USD and EUR cash are eligible. Trades are discounted based on the Cheapest-to-Deliver currency.
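The effect of the CTD option in CSA2 can be sketched numerically. Under a multi-currency CSA, the collateral poster delivers whichever eligible currency is cheapest for them, which in practice means trades are discounted at the highest eligible rate (ignoring the cross-currency basis for simplicity). A toy example with flat, hypothetical OIS rates – not the actual curves used in our analysis:

```java
public class CtdDiscounting {
    /** Continuously compounded discount factor. */
    static double df(double rate, double time) {
        return Math.exp(-rate * time);
    }

    /** CTD discounting: the poster delivers the highest-yielding eligible
     *  currency, so the effective discount rate is the max across currencies. */
    static double ctdDf(double[] eligibleRates, double time) {
        double max = eligibleRates[0];
        for (double r : eligibleRates) max = Math.max(max, r);
        return df(max, time);
    }

    public static void main(String[] args) {
        double t = 30.0;                           // 30Y remaining maturity
        double usdOis = 0.010;                     // hypothetical flat USD OIS rate
        double[] eligible = {0.010, 0.007, 0.015}; // USD, EUR, GBP (illustrative)
        double singleCcy = df(usdOis, t);          // CSA1: USD OIS discounting
        double multiCcy = ctdDf(eligible, t);      // CSA2: CTD discounting
        // A 50bp rate gap compounded over 30 years moves the DF by roughly 14%.
        System.out.printf("CSA1 DF=%.4f  CSA2 DF=%.4f%n", singleCcy, multiCcy);
    }
}
```

On a long-dated swap, a discount factor shift of that size is exactly what produces PV differences of the order of 10% of notional, as the tables below report.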

The Impact

Firstly, let’s take a Vanilla Interest Rate Swap as an example: a standard 100m 40Y USD Libor 3M IRS with 30 years remaining maturity. The PVs for this swap under the various CSA terms described are given in Table 1. As you can see, the terms of the CSA dictate the valuation of the swap:

[Table 1: Valuation of a 40Y USD LIBOR 3M IRS with 30Y remaining maturity under different CSA terms. Columns: CSA1 Single Ccy (OIS disc.) and CSA2 Multi Ccy (CTD disc.); row: Trade PV.]

If we assume that the discounting method used to date has been Libor discounting, the valuation differences relative to Libor discounting are:

[Table 2: Valuation of a 40Y USD LIBOR 3M IRS with 30Y remaining maturity using Libor discounting and under different CSA terms. Columns: Libor disc., CSA1 Single Ccy (OIS disc.) and CSA2 Multi Ccy (CTD disc.); row: Trade PV.]

The difference in values between CSA1 and CSA2 is very interesting, as it quantifies the valuation impact of switching from a multi-currency cash CSA to a single-currency cash CSA (or vice-versa) – in our example, this difference is £10m, or 10% of the trade notional.

Why such a difference? It’s down to the fact that a multi-currency CSA will typically assume a ‘Cheapest-to-Deliver’ approach to collateral selection and therefore discounting. At current market levels, the trade cash flows are discounted using EUR EONIA, translated to the trade currency through cross-currency swaps.

Can the same be said for other OTC products? To answer this question, we turn to cross-currency swaps. Changing the CSA terms for a £100m 30Y GBP / USD cross-currency swap with 20 years remaining maturity yielded the following results:

[Table 3: Valuation of a 30Y GBP/USD CCS with 20Y remaining maturity under different CSA terms and valuation approaches. Columns: USD OIS disc., and CSA2 CTD disc. with and without the Xccy basis; row: Trade PV.]

CSA1 and CSA2 valuations factor in the cross-currency basis – the impact of switching from a multi-currency cash CSA to a single-currency cash CSA (or vice-versa) is £7m, or 7% of the trade notional.

Under the valuation approach without the cross-currency basis, the OIS discounting for each of the legs has to be done independently, resulting in a significant PV difference (7% of notional). This is equivalent to a significant spread on the trade: 35 bps in this example.
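The conversion from a PV difference to a running spread in the example above is a simple annuity calculation: divide the PV difference by the PV of a 1-unit running payment. A rough sketch (hypothetical helper method, with the annuity crudely approximated by the 20 years of remaining maturity, i.e. ignoring discounting of the spread payments):

```java
public class SpreadEquivalent {
    /** Running spread (in bps) equivalent to a given PV difference. */
    static double spreadBps(double pvDiff, double notional, double annuityFactor) {
        return pvDiff / (notional * annuityFactor) * 10_000;
    }

    public static void main(String[] args) {
        double notional = 100_000_000;   // GBP 100m
        double pvDiff = 7_000_000;       // 7% of notional
        double annuity = 20.0;           // crude: 20Y of remaining payments, undiscounted
        System.out.println(spreadBps(pvDiff, notional, annuity)); // ~35 bps
    }
}
```

With a properly discounted annuity the figure shifts a little, but the order of magnitude – tens of basis points for a mis-valuation of a few percent of notional – is the point.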

So to answer the previous question, representative products found in a typical LDI portfolio are affected by changing the terms in a CSA. In particular, they are all significantly affected by switching from a multi-currency CSA to a single-currency CSA.

Correctly valuing cross-currency instruments is also critical: the reflection of the cross-currency basis can add up to a difference in mark-to-market in the tens of millions.

Time to take action

You may be sitting there thinking, “Well, my CSAs are clean: mostly cash and some with gilts, so none of this applies to me.” Unfortunately, you would be wrong. As we showed earlier, differences in discounting between single-currency and multi-currency CSAs have a significant impact on the valuation of the portfolio. This applies to future trades as well as current trades. Correctly valuing these instruments by factoring in the impact of collateral terms has the following benefits for asset managers:

  • better pricing, by being able to independently check the dealer quote at the time of execution
  • more accurate NAV reported to investors
  • daily validation against dealers’ marks for collateral exchange

OpenGamma’s independent valuation service is based on our award-winning analytics and can help value your portfolio based on client mandates and CSA terms, by assigning valuation methodologies to each fund and counterparty.

For more information about our solutions, please contact Maxime Jeanniard du Dot at


Your questions answered: new Q&A available on our MVA research

We’ve received a number of follow-up questions on the back of the research paper on MVA we published last month in collaboration with University College London.

We’ve provided answers to the questions below, and will continue to add answers as further questions come in. Please send your questions to Marc Henrard.

Q.1. In the paper, several measures are mentioned. Can you describe what they represent? Which of those measures is actually used in the computations?

A: There are two or three probability measures involved, depending on which perspective one takes. There is a pricing measure, applied for option pricing and often called Q. There is the historical or statistical measure, which represents the real-world (economic) probability measure, is used for risk measurement and management, and is often denoted by P. Sometimes it may also be viewed as the subjective measure of the clearing houses. We make this distinction between the two interpretations of the P-measure in the most recent version of the paper to account for the fact that the data used by clearing houses might not be a perfect representation of economic reality.

Before we begin with the actual IM computations, we calibrate the (swap) price model to swaption data. For this task we apply the pricing dynamics of the price process. The fact that the IM process is based on the dynamics of the swap price, calibrated to option data, links our IM valuation to market views on how the swap price might evolve in the future. However, calibration to option data alone might not encode enough information in the IM dynamics. An information supplement is incorporated if we also calibrate the IM process to CCP data, which we assume is historical data. We therefore need to consider the IM dynamics under the historical measure P so that the calibration to the clearing house data is consistent.

Given that we have explicit formulae for the measure changes among the considered probability measures, we are able to produce consistent IM dynamics and thus consistent margin valuation adjustments. Since MVA can be viewed as a replicable pay-off dependent on the future IM payments, we may compute MVA as if it were an option. Because we have the means to compute the IM under the statistical P-measure, while also having an explicit measure-change relation between the P-measure and the risk-neutral measure Q, we can compute the MVA under the pricing measure, as is customary when pricing options. Our approach supports both views of a financial market: the risk-neutral one used for pricing and hedging, and the historical one applied for risk measurement, as in the case of IM computations. This consistency between the two main tasks in financial practice is, in our view, one of the valuable features of the proposed approach.
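For readers less familiar with the object being priced, a common generic formulation (not specific to the paper's model) expresses the MVA as the expected discounted funding cost of the future IM under the pricing measure Q:

```latex
\mathrm{MVA}_t \;=\; \mathbb{E}^{\mathbb{Q}}\!\left[\int_t^T D(t,s)\, f(s)\, \mathrm{IM}(s)\, \mathrm{d}s \;\middle|\; \mathcal{F}_t\right],
```

where D(t,s) is the discount factor, f(s) the funding spread paid on posted IM, and IM(s) the initial margin profile, whose dynamics are modelled under P and carried to Q via the explicit measure changes discussed above.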

Q.2. The paper makes use of the ‘rational multi-curve interest rate models’. Why this class of models?

A: When we started this research we had long discussions about the modelling of the interest rate market in a multi-curve framework. The first conclusion was that there is no standard “go-to” model. In the single-curve framework, we might have selected an HJM model. However, we wanted a multi-curve interest rate setup with several particular properties. The multi-curve models would need to give rise to term structures with realistic dynamics for the base curve and for the spread between OIS and LIBOR. The chosen modelling framework would need to produce parsimonious but nevertheless flexible models. It would also need to represent the market in a realistic way, calibrating well to term structures, option volatilities and skew (historical or implied smile). After reviewing several models, we found that the rational multi-curve models are satisfactory in this sense, and offer a desirable level of flexibility for calibration to market data (cf. Section 6 of the white paper).

Q.3. Can one calibrate your IM model to the IM as computed by the CCPs on the initial date?

A: Yes, and we are continuing work in this direction to improve the relationship to the historical databases utilised by a CCP. As explained in Q.1, after calibrating the swap price model to swaption data, we calibrate the IM model by comparing the model-implied P&L with the P&L implied by data of a CCP. A small residual gap remains and this is one of the aspects we are focusing on in ongoing research work. However, we do obtain a perfect match for the starting value IM0 by multiplying the model number by a constant factor so as to match the clearinghouse IM at time 0.

Q.4.  What back-testing could be done within this approach?

A: We first need to clarify what we would like to validate through back-testing. One of the ultimate goals of our developments is to estimate MVA. It would be interesting to back-test the MVA computations through the analysis of the realised P/L of the hedging strategy proposed by the model. This would be a joint test of the IM dynamics we proposed and of the cost of the IM, which is beyond the scope of our current analysis. For that validation, daily strategies and the residual P/L would be computed on a daily basis for the life of different instruments. The underlying market is the IRS market, where the maturities are relatively long. We would need to collect at least several years of CCP (or other IM model) data to achieve the comparison.

Another validation that could be done is the comparison between the estimated IM distribution of our approach and the actual distribution at a given time interval in the future, let’s say one year. Estimating the model distribution today (in the historical measure) and waiting one year to compare to the realized IM will give us only one data point. To have a meaningful comparison set, we probably need at least 100 points, which would mean, if we want the different experiments to be independent, waiting 100 years. We can probably do with a shorter time interval and overlapping data, but in any case, it will require historical data, including CCP data, for long periods.

A third meaningful notion of back-testing in our context would be to consider IM figures computed by a clearing house, say between 2013 and 2014, calibrate the adjusted IM model to this data and then compute the IM values with the model and the realised swap curves for the period 2014-15. Then one compares the IM values required by the clearinghouse in the period 2014-15 with those obtained with the IM model.

We have started some projects on back-testing related to CCPs and we will report on them in the future.

Q.5. Why should one apply your IM model if a perfect match with IM data by clearing houses cannot be obtained?

A: First, as far as we know, current industry methods for the computation of IM do not allow one to compute the IM at a future point in time and, in addition, for an arbitrarily long margin period of risk. Not having a method for the calculation of future initial margin payments prevents one from computing MVA. Secondly, we do not assume that one knows the algorithm with which a clearing house computes IM. In such a situation, we need to develop an “in-house” model for the computation of IM and then calibrate it to IM values posted in the past by clearing houses and to any additional historical data one may consider relevant. Thirdly, one could be of the opinion that the methodology of a clearing house needs innovation, even if one knew exactly what the clearing house algorithm is. The proposed IM approach can then be viewed as a kind of dynamical extension of a generic clearing house method for IM computations.

So, returning to the question of “back-testing” in the answer to Q.4 above, we provide a framework to calculate IM that is essentially based on taking a risk measure of a price increment over a margin period of risk, given the information available at the time the IM is computed. Since the goal is to calculate the IM at a future point in time, it is reasonable to calibrate our underlying price model to options, which provide “forward-looking” market information; in our view this is another valuable feature of the approach. It is of course conceivable that clearing houses do not adopt such a dynamic, “forward-looking” methodology to compute the IM, in which case the clearing house IM and our IM values may a priori differ substantially. One could say that this is the whole point of developing a new approach to IM, if it were felt that clearing houses should innovate the way they compute it. However, our approach can accommodate an adjustment whereby IM data used by clearing houses is fed into our model, bringing the model IM closer to the clearing house values. The currently proposed procedure is explained in Section 4 of our white paper.


Results of our margining survey

In our last newsletter we asked readers to participate in a survey on margining. This produced some interesting results in regard to trends we see gaining momentum in the market in the near future: the rise of lifetime initial margin calculations, and an increasing need for independent CCP model implementation.

A clear majority – 68% – of those surveyed said they are factoring capital, liquidity, and margin into trade pricing. This shows us that most respondents are increasingly aware of the funding and capital costs of collateralising initial margin with high-quality assets for the life of their trades. (You can read our previous blogpost for more on how smarter IM calculation can minimise capital allocation.)

With this in mind, we were interested in the time horizons the market is using to calculate initial margin impact before submitting a new trade.  While 23% aren’t using any pre-trade calculation, 41% are using spot initial margin, and 36% are using Forward Initial Margin (up to 50Y). It was interesting to see that 64% of the respondents are not pricing in the fact that the margin will change over the life of the trade.

With regard to CCP model implementation, we found that only 27% of those surveyed currently validate and challenge CCP margin calls, while 41% validate only, and 32% do neither.  In the previous three months 33% of respondents had challenged between 1 and 5 times, and just 5% each had challenged between 6 and 10 times or more than 10 times.

Given the substantial cost-saving potential associated with independent validation, and the implications for reducing systemic risk, we can only conclude that institutions do not yet have the tools to fully replicate initial margin models on an independent basis.

You can read about our margining tool here, contact us for a demo, or subscribe to our newsletter.


Strata and multi-curve: Interpolation and risk

In this instalment of our blog series focusing on the multi-curve framework in the OpenGamma Strata library, we will discuss the impact of interpolation on forwards and on the delta risk. The goal of the blog is to show a couple of examples and provide a code base for the readers to reproduce and expand those examples. The code can be found on the Strata GitHub repository here.

One important point in the discussion of interest rate curve interpolation, before the question of how to interpolate, is the question of what to interpolate. I do not plan to discuss that issue in this blog, but you can find my rant on the subject in Section 4.1 of my book on the multi-curve framework (Henrard (2014)). In this blog, I have decided to use interpolation on the zero rate (continuously compounded), which is probably the most widely used approach.
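To make the choice concrete, here is a toy sketch of interpolation on the continuously compounded zero rate, outside of Strata (hypothetical nodes, not the FRTB node set or the calibrated curves discussed in this blog): the interpolated zero rate z(t) gives discount factors P(t) = exp(-z(t)·t), from which forward rates follow, and the kink in the forwards at each node is already visible.

```java
public class ZeroRateInterpolation {
    static final double[] TIMES = {1.0, 5.0, 10.0};      // node times in years (hypothetical)
    static final double[] ZEROS = {0.010, 0.020, 0.025}; // node zero rates (hypothetical)

    /** Linear interpolation of the zero rate between nodes, flat extrapolation outside. */
    static double zero(double t) {
        if (t <= TIMES[0]) return ZEROS[0];
        if (t >= TIMES[TIMES.length - 1]) return ZEROS[ZEROS.length - 1];
        int i = 1;
        while (TIMES[i] < t) i++;
        double w = (t - TIMES[i - 1]) / (TIMES[i] - TIMES[i - 1]);
        return (1 - w) * ZEROS[i - 1] + w * ZEROS[i];
    }

    /** Discount factor from the continuously compounded zero rate. */
    static double df(double t) {
        return Math.exp(-zero(t) * t);
    }

    /** Continuously compounded forward rate between t1 and t2. */
    static double forward(double t1, double t2) {
        return Math.log(df(t1) / df(t2)) / (t2 - t1);
    }

    public static void main(String[] args) {
        // Linear zero rates make -log P(t) = z(t)*t piecewise quadratic, so
        // short-dated forwards are piecewise linear with a kink at each node.
        System.out.printf("fwd 4.9y-5.0y: %.4f%n", forward(4.9, 5.0));
        System.out.printf("fwd 5.0y-5.1y: %.4f%n", forward(5.0, 5.1));
    }
}
```

The jump in the short-dated forward across the 5Y node in this toy curve is the same mechanism behind the saw-tooth profiles in Figures 1 and 2 below.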

We start with a graphical representation of the impact of interpolation. For this I use the same data as in the previous blog, calibrating two curves in GBP to realistic market data from 1-Aug-2016. One curve is the OIS (SONIA) curve and the other is the forward LIBOR 6M curve. The nodes of the curves are those prescribed by the FRTB rules (see BCBS (2016)). Similar effects would be visible with roughly any selection of nodes; these were selected for convenience.

Throughout this blog we will use three types of interpolation scheme: linear, double quadratic and natural cubic spline. They were chosen to give interpolation schemes with different properties; I do not claim that these are the best for all (or even any) purposes. The graph of the forward SONIA rates, daily between the data date and 31 years later, is displayed in Figure 1. Each date is represented by one point.


Figure 1: Overnight forward rates with three different interpolators.

The forward rates for the Libor curves are provided in Figure 2.


Figure 2: Libor 6M forward rates with three different interpolators.

The graphs resemble the familiar ones in any note on curve interpolation. Linear interpolation leads, even with very clean data like in this example, to forward rate profiles with a saw-tooth effect. Some impacts are very pronounced; in the middle of the graph there is a drop of 40 bps in the forward rate from one day to the next. A similar drop is visible in the Libor graph, but this time over a period of 6 months (one tenor of the underlying Libor). There is also a large difference between the different interpolators. In the case of the overnight rates, the largest difference in forward rate is 39.29 bps, observed for the rate starting on 31-Jul-2046 (30 years forward) between the linear and the double quadratic schemes.

The code used to produce the above graphs is available in the blog example Strata code. The code to produce the above curves is roughly 10 lines; the code to export the data to be graphed by Matlab is longer than the calibration code itself.

The interpolation mechanism has an impact not only on the forward rates, and thus on the present value, but also on the implied risk. We now look at this risk, using the bucketed PV01 (or key rate duration) as the main risk indicator. For this we take one forward swap starting in 6 months with a tenor of 9 years. The fixed rate is 2.5% (above the market rate). For that swap, the par rate bucketed PV01 is provided in Figures 3 to 5.


Figure 3: PV01 with linear interpolation scheme



Figure 4: PV01 with double quadratic interpolation scheme


Figure 5: PV01 with natural cubic spline interpolation scheme

The first point to notice is that the sums of the bucketed PV01 for both the OIS curve and the Libor curve are almost the same for all interpolation schemes. A parallel curve move results in a similar change of PV, with only a very small effect from the level and shape of the curve.

The interesting part is obviously the differences. The double quadratic and natural cubic spline are non-local interpolators: the sensitivity to the cash flows in the 5Y to 10Y period extends beyond the 5Y and 10Y nodes. There is a sensitivity to the 3Y and 15Y points and, in the case of the natural cubic spline (NCS), even to the 20Y and 30Y points. This non-locality can lead to non-intuitive hedging results. In the NCS case, the swap with a maturity of 9Y6M is hedged with swaps up to 10Y but also with swaps of maturity 15Y, 20Y and 30Y. Note also the “wave” effect, where the hedging is done alternately with swaps of the opposite and of the same direction as the original one.

The code to calibrate the three sets of curves and compute the related par market quote sensitivities – the variable mqs in the code – takes only 10 lines:

/* Calibrate curves */
     ImmutableRatesProvider[] multicurve = new ImmutableRatesProvider[NB_SETTINGS];
     for (int loopconfig = 0; loopconfig < NB_SETTINGS; loopconfig++) {
       multicurve[loopconfig] = CALIBRATOR.calibrate(configs.get(loopconfig).get(CONFIG_NAME), MARKET_QUOTES, REF_DATA);
     }
     /* Compute PV and bucketed PV01 */
     MultiCurrencyAmount[] pv = new MultiCurrencyAmount[NB_SETTINGS];
     CurrencyParameterSensitivities[] mqs = new CurrencyParameterSensitivities[NB_SETTINGS];
     for (int loopconfig = 0; loopconfig < NB_SETTINGS; loopconfig++) {
       pv[loopconfig] = PRICER_SWAP.presentValue(swap, multicurve[loopconfig]);
       PointSensitivities pts = PRICER_SWAP.presentValueSensitivity(swap, multicurve[loopconfig]);
       CurrencyParameterSensitivities ps = multicurve[loopconfig].parameterSensitivity(pts);
       mqs[loopconfig] = MQC.sensitivity(ps, multicurve[loopconfig]);
     }

It is also interesting to look at a “dynamic” version of the above PV01 report. For that I produce the same report for swaps with starting dates between the data date and two years later, all with a tenor of 8 years; the swap maturities are thus between 8 and 10 years. Let’s start with the linear interpolation scheme, as it displays the profile that most people would probably consider intuitive (at least I do). That profile is displayed in Figure 6. For an 8Y spot-starting swap (the left side of the graph), the sensitivity is mainly on the 5Y and 10Y IRS, with more on the 10Y. The main sensitivities are represented by the coloured curves; the other sensitivities (Libor and OIS) are represented by the thin dark grey lines. As we move to the 2Yx8Y swap along the X-axis, the sensitivity to the 10Y increases and the sensitivity to the 5Y decreases down to zero. The sensitivity to the 2Y increases in absolute value, but with the opposite sign to the 10Y. The total sensitivities of the swaps are fairly constant.


Figure 6: PV01 profile for linear interpolation scheme.


Figure 7: PV01 profile for double quadratic interpolation scheme.

Let’s jump immediately to the natural cubic spline profile displayed in Figure 8; Figure 7, for double quadratic, is in between the other two. For this profile, I will start the explanation from the right side, with the 2Yx8Y forward swap. The risk is very similar to the one depicted for linear interpolation: the main sensitivities (2Y and 10Y) are on nodes, and the sensitivities in the two cases are similar. When we move away from the nodes, in our case with the starting date moving down from 2Y to the valuation date and the maturity from 10Y down to 8Y, a different profile appears. Sensitivities are no longer only to the nodes surrounding the swap dates: there are now significant sensitivities to the adjacent nodes (3Y and 15Y for example), with the opposite sign to the main sensitivities and a non-intuitive behaviour. For example, the sensitivity to the 15Y node increases when the maturity of the swap moves away from that node. A significant sensitivity appears on the 2Y when the start date of the swap moves away from the 2Y node, and it is largest in this profile for the spot-starting swap.


Figure 8: PV01 profile for natural cubic spline interpolation scheme.
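The non-local behaviour of the natural cubic spline can be checked directly. The sketch below uses illustrative nodes and rates and SciPy’s `CubicSpline` rather than the Strata interpolators: it bumps the 15Y pillar and measures the impact on the interpolated rate at 4Y, two intervals away. The linear scheme shows no impact at all, while the natural cubic spline does, which is exactly the mechanism behind the adjacent-node sensitivities in Figure 8.

```python
import numpy as np
from scipy.interpolate import CubicSpline

nodes = np.array([1.0, 2.0, 3.0, 5.0, 10.0, 15.0, 30.0])  # illustrative pillars
zeros = np.array([0.010, 0.012, 0.014, 0.017, 0.021, 0.023, 0.024])

def interp_rate(t, rates, scheme):
    """Interpolated zero rate at t under the given scheme."""
    if scheme == "linear":
        return float(np.interp(t, nodes, rates))
    return float(CubicSpline(nodes, rates, bc_type="natural")(t))

def node_impact(t, i, scheme, bump=1e-4):
    """Change in the interpolated rate at t when pillar i is bumped by 1bp."""
    bumped = zeros.copy()
    bumped[i] += bump
    return interp_rate(t, bumped, scheme) - interp_rate(t, zeros, scheme)

# Rate at 4Y, bumping the 15Y pillar (index 5), two intervals away:
impact_linear = node_impact(4.0, 5, "linear")
impact_spline = node_impact(4.0, 5, "natural")
```

With linear interpolation the rate at 4Y depends only on the 3Y and 5Y pillars, so the bump has strictly zero effect; the spline, being a global interpolator, propagates the bump (typically with alternating sign) across the whole curve.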

Obviously the sensitivity to different nodes will lead to recommended hedging using swaps with those tenors. The 8Y swap would be hedged with 2Y, 3Y, 5Y, 10Y and 15Y swaps: maybe not the most intuitive approach. It is also possible to easily compute the required notional to hedge each sensitivity; that will be described in a forthcoming instalment of this blog.
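The required-notional computation is simple once the bucketed PV01s are known: each bucket is offset by a swap of the matching tenor, sized by the ratio of the PV01s. A sketch with hypothetical numbers (the PV01 values below are invented for illustration, not read off the figures):

```python
# Hypothetical bucketed PV01s of the 8Y swap, in currency units per 1bp
portfolio_pv01 = {"2Y": -12_000.0, "5Y": 31_000.0, "10Y": 46_000.0}

# PV01 of a 100m hedge swap of the matching tenor, also illustrative
hedge_pv01 = {"2Y": 19_000.0, "5Y": 46_000.0, "10Y": 87_000.0}

def hedge_notionals(target, hedges, base_notional=100e6):
    """Size each hedge swap so its bucket PV01 offsets the target's."""
    return {k: -target[k] / hedges[k] * base_notional for k in target}

notionals = hedge_notionals(portfolio_pv01, hedge_pv01)
```

A positive notional means the hedge is entered in the opposite direction to the bucket's sign; after hedging, each bucket's residual PV01 is zero by construction.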

The goal of this blog was not to provide a definitive answer on which interpolation scheme to use for curve calibration (I don’t think there is such a definitive answer) but to show that a personal analysis is important. The analysis can be done easily with the right tools. The code used to produce Figures 3 to 5 is available in the examples associated with this blog. The main part of the code consists of roughly 10 lines, covering calibration and sensitivity computation. The other figures required a loop around that code to produce the sensitivities for roughly 500 swaps (2Y of business days).

In this instalment of the multi-curve blogs, we have provided some examples of the impact of interpolation on forward rates and risk computations. We also showed how easy it is to obtain those results with the Strata code.




Comparing SIMM vs. CCP margin delivers unexpected results

On 9 June, it became clear that the European Union would delay the implementation of the mandatory bilateral margin requirements; see for example the Risk article or the Bloomberg article. For the rest of the world, however, 1 September 2016 remains a major deadline.

The mandatory clearing of vanilla derivatives and the mandatory bilateral margin for all derivatives are part of the same regulatory push to overcome some of the banking system’s weaknesses evidenced by the crisis that started in 2007. Mandatory clearing is already effective in the US and took effect in Europe for some users on 21 June 2016.

Mandatory clearing and bilateral margins are two faces of the same framework, so it makes sense to compare them. Moreover, in the supplementary information on the bilateral margin, the US regulators indicated that such a comparison is appropriate. More precisely, in the document “Margin and Capital Requirements for Covered Swap Entities,” the US agencies indicate (on page 138) that:

In light of the clear competitive forces that will exist between cleared and non-cleared swaps, the Agencies believe that it is appropriate to compare the initial margin requirements of non-cleared swaps to those of similar cleared swaps.

At OpenGamma, we are ideally positioned to do this comparison. Our solutions allow us to compute Initial Margin (IM) for numerous CCP and bilateral methodologies, including ISDA SIMM (“SIMM”), of which we are an official licensee, using our award-winning margin calculation solution.

In this blog, I report some comparisons for single swap portfolios. I take this approach as it helps to identify features of the SIMM model that are not obvious when SIMM is discussed in the context of larger portfolios.

Comparing the current IM for simple portfolios should only be one step that any financial institution takes in deciding its strategy around derivatives. Many other factors should be taken into account, like the cost of becoming a member of a CCP or a clearing client, the netting effect of central clearing, the type of collateral accepted, the margining transparency, the liquidity difference between bilateral and cleared trades, etc.

Key Findings

By doing this comparison we found that the relation between cleared and bilateral margins is a lot more complex than the often-mentioned ‘square root of 2’ ratio, which comes from the ratio between the margin period of risk (MPR) of 10 days for bilateral margins and 5 days for central clearing. We investigated the impact of currency, tenor and notional in order to highlight these complexities.
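The ‘square root of 2’ heuristic, and one reason it breaks down, can be written out explicitly. Under i.i.d. daily P&L, margin scales with the square root of the MPR, giving sqrt(10/5) ≈ 1.41; but once the confidence levels differ as well (99% VaR for bilateral margin versus, say, a 99.75% quantile at a CCP), the ratio moves, as this quick normal-distribution sketch shows:

```python
import math
from statistics import NormalDist

# Square-root-of-time scaling of the margin period of risk (MPR)
mpr_ratio = math.sqrt(10 / 5)  # 10-day bilateral vs 5-day cleared

# The naive ratio ignores the different confidence levels:
# 99% VaR for bilateral margin vs e.g. a 99.75% quantile at a CCP.
q_bilateral = NormalDist().inv_cdf(0.99)
q_ccp = NormalDist().inv_cdf(0.9975)
adjusted_ratio = mpr_ratio * q_bilateral / q_ccp
```

For normally distributed P&L the higher CCP confidence level eats into the MPR advantage, pulling the ratio well below sqrt(2); historical data, expected shortfall and add-ons move it further still.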


The first set of comparisons relates to a single swap. We compared different CCPs to SIMM:

  • The swaps used are IRS v 3M or 6M and OIS; both the payer and receiver (from the point of view of the member) are analyzed
  • Due to the simple methodology, the SIMM payer and receiver have the same IM
  • We selected ATM swaps with tenors between 1Y and 30Y
  • The swap currencies are EUR and USD, the notional of the swaps is USD 100m converted in swap currency
  • The data used is from the beginning of June 2016
  • We compute CCP margin for members; some CCPs charge a higher margin for clients clearing through a broker.

The results for EUR are displayed in Figure 1. The SIMM IM numbers are above the CCP numbers, even if only slightly. The minimum requirement for bilateral margins is a Margin Period of Risk (MPR) of 10 days and a 99% VaR. For CCPs, the MPR for swaps is 5 days, but the CCPs use different methodologies, like Expected Shortfall at 99.75% with 10 years of historical data, VaR at 99% with 3 years of historical data plus a stress period, or VaR at 99.7% with more than 5 years of historical data. In each case the CCPs apply add-ons on top of the base IM. Depending on the historical data and the add-ons, there is no clear ordering between SIMM and CCPs, even if SIMM tends to be above the CCP figures. The relation between SIMM and CCPs also changes through time.


Figure 1: Comparison of IM between two CCPs and SIMM for EUR IRS and OIS with different tenors.
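The effect of those methodological choices can be seen on a single simulated P&L series. The sketch below (random data, nothing to do with the actual CCP models) computes a historical 99% VaR and a 99.75% Expected Shortfall on the same ten years of daily P&L; the two measures give materially different margins for an identical portfolio:

```python
import random

random.seed(42)
pnl = [random.gauss(0.0, 1.0) for _ in range(2500)]  # ~10 years of daily P&L

def hist_var(pnl, level):
    """Historical VaR: the (1 - level) tail quantile of the loss distribution."""
    losses = sorted((-x for x in pnl), reverse=True)
    k = max(int(len(losses) * (1 - level)), 1)
    return losses[k - 1]

def hist_es(pnl, level):
    """Historical Expected Shortfall: average loss beyond the VaR level."""
    losses = sorted((-x for x in pnl), reverse=True)
    k = max(int(len(losses) * (1 - level)), 1)
    return sum(losses[:k]) / k

var_99 = hist_var(pnl, 0.99)
es_9975 = hist_es(pnl, 0.9975)
```

By construction the 99.75% ES sits above the 99% VaR on the same data, so the confidence level and risk measure alone already blur the simple 10-day versus 5-day comparison.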

The results for USD are displayed in Figure 2. In general, the SIMM IM numbers are above or in line with the CCP numbers, but the results are not as clear-cut as in the EUR case. The reason for this is probably the SIMM methodology: in SIMM, the same risk weights are applied to the basis-point sensitivities for all main currencies. EUR rates are lower, and thus the sensitivity to one basis point is higher for the same notional. This explains why the SIMM numbers for USD are below the numbers for EUR.
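The rate-level effect is easy to reproduce. With a flat curve, the PV01 of a par swap is roughly the discounted fixed-leg annuity, which grows as rates fall; multiplying by a single flat risk weight (the numbers below are illustrative, not actual SIMM weights) then gives a larger IM for the lower-rate currency at the same notional:

```python
import math

def par_swap_pv01(flat_rate, years, notional=100e6):
    """PV01 of a par swap on a flat curve: the discounted annuity x 1bp."""
    annuity = sum(math.exp(-flat_rate * t) for t in range(1, years + 1))
    return notional * annuity * 1e-4

pv01_low_rates = par_swap_pv01(0.005, 10)   # EUR-like low-rate environment
pv01_high_rates = par_swap_pv01(0.017, 10)  # USD-like higher-rate environment

RISK_WEIGHT = 50.0  # illustrative flat weight per bp, NOT an actual SIMM weight
im_low_rates = RISK_WEIGHT * pv01_low_rates
im_high_rates = RISK_WEIGHT * pv01_high_rates
```

Same notional, same tenor, same risk weight: the lower-rate currency carries the larger basis-point sensitivity and therefore the larger SIMM-style IM.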

Also note that for long-term swaps, the schedule-based approach produces numbers below both SIMM and the CCPs. However, this is largely irrelevant for larger banks; using the schedule has been ruled out for their interbank exposure as it does not offer any netting benefit between trades.


Figure 2: Comparison of IM between some CCPs and SIMM for USD IRS and OIS with different tenors.

Fixed rates

In the second set of comparisons, we look at 10Y swaps with different fixed rates; the results are displayed in Figure 3. We selected JPY for the swaps because it is not the base currency for any of the IM methodologies analysed. ATM swaps have a PV, and thus a currency exposure, of 0. Because the CCPs treat currency exposure in different ways, the profiles, which represent swaps with non-zero currency exposure, can be quite different. For SIMM, the profile depends on the base currency; here we have selected EUR, and the currency risk is taken into account in the IM computation.


Figure 3: Comparison of IM between some CCPs and SIMM for JPY IRS and OIS with different fixed rates.

In general, the IM for a JPY swap is lower than the one for USD and EUR. This is due to the lower volatility of the JPY historical data used by the CCPs, and is recognised by SIMM, which has a special low risk weight for JPY. The interesting feature of this graph is the shape of the profiles. The ‘blue’ CCP does not include the currency exposure of the trade in the IM computation; its IM graph is roughly a straight line. The graph for SIMM looks more like a ‘smile’, due to the interaction/correlation between the interest rate risk and the currency risk. As the ‘green’ CCP and our SIMM implementation have base currencies different from the swap currency and take the currency exposure into account, their graph shapes differ, but both are convex curves and not lines.
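The ‘smile’ can be reproduced with a stylised aggregation. Below, an IR margin taken as flat in the fixed rate is combined with an FX margin proportional to the swap’s mark-to-market, using a SIMM-style square-root correlation rule; every parameter is invented for illustration. The result is flat at the money, where the currency exposure vanishes, and convex away from it:

```python
import math

# All numbers below are invented for illustration.
ANNUITY = 9.0       # 10Y fixed-leg annuity per unit notional
PAR_RATE = 0.003    # at-the-money fixed rate
IR_IM = 1.0e6       # interest-rate IM, taken as flat in the fixed rate
FX_WEIGHT = 0.08    # FX risk weight applied to the swap's mark-to-market
RHO = 0.2           # IR/FX correlation in the aggregation

def total_im(fixed_rate, notional=100e6):
    """Combine IR and FX margin with a square-root correlation rule."""
    pv = ANNUITY * (PAR_RATE - fixed_rate) * notional  # receiver-swap PV
    fx_im = FX_WEIGHT * abs(pv)
    return math.sqrt(IR_IM ** 2 + fx_im ** 2 + 2 * RHO * IR_IM * fx_im)

# IM across fixed rates from 50bp below to 50bp above the par rate
ims = [total_im(PAR_RATE + k * 1e-3) for k in range(-5, 6)]
```

At the par rate the PV, and hence the FX term, is zero and the IM collapses to the IR-only figure; moving the fixed rate in either direction adds FX margin, producing the convex smile shape.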

Non-linear behavior

In the last comparison for this blog, we take a 10Y ATM swap in USD and check the IM for different notionals. As a first approximation, one could think that the IM is linear in the notional, so that this comparison would not bring more information than the first test. But all CCP methodologies include some kind of add-on that is not linear in size; the IM increases faster than the notional. This is particularly true for the ‘liquidity add-ons’. It is interesting to check from which size the CCPs become more expensive in terms of margin than a bilateral IM. Note that SIMM has some provision for a “concentration risk” (CR) add-on, which would also make SIMM non-linear in size, but for the moment the CR threshold is undefined and the method remains linear in size. The results are presented in Figure 4.


Figure 4: Comparison of IM between some CCPs and SIMM for USD IRS and OIS with different notionals.

Figure 4 presents the results with a log scale for the notional and a linear scale for the ratio between CCP and SIMM IM. For all CCPs, above a certain notional the IM is larger than SIMM’s. For this example with a single 10Y USD swap, the notional where the non-linearity kicks in is around 10 billion. The ratio can be above 2 for large notionals. For more complex portfolios, especially when the overall risk is small and there are large offsets between different maturities, the ratio can be even larger.
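A stylised version of that crossover: a CCP margin that is linear in the notional plus a liquidity add-on above a size threshold, against a SIMM margin that is purely linear. All parameters below are invented for illustration; the point is only the shape, a crossover notional above which the CCP becomes more expensive:

```python
def ccp_im(notional, base_rate=0.009, threshold=10e9, addon_slope=0.01):
    """Linear base margin plus a liquidity add-on above a size threshold."""
    addon = addon_slope * max(notional - threshold, 0.0)
    return base_rate * notional + addon

def simm_im(notional, rate=0.012):
    """SIMM stays linear in size while the concentration threshold is undefined."""
    return rate * notional

# First notional (in billions) where the CCP margin overtakes SIMM
crossover_bn = next(n for n in range(1, 101) if ccp_im(n * 1e9) > simm_im(n * 1e9))
```

Below the threshold the CCP is cheaper; once the add-on kicks in, its margin grows faster than linearly in the notional and eventually overtakes the bilateral figure, which is the pattern behind Figure 4.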


The comparison between CCP IM and bilateral IM (represented here by SIMM) for swaps is not a straightforward task. Even for a single swap, results vary with side, currency, rate, notional and the date the analysis is run. In most cases SIMM is above the CCP figures, but not always. Portfolio effects and add-ons bring even more complexity to the process, but this will be the subject of another blog.

The detailed analysis of these issues requires not only the tool to run the computation for the different CCPs and bilateral margining in parallel but also a continuous monitoring of the data used.

Please contact us for more details on the results presented in this study or information on our margin calculation offering, which was used for this study and is used by market participants to perform similar analyses.
