Forget the headlines: What you really should know about the nasdaq default

28th September 2018 Jo Burnham

You’ve probably seen the headlines about the trader who blew ‘€100m hole in Nasdaq’s Nordic power market’. This was a timely reminder of the potential impact of market defaults, coming as it did 10 years after the failure of Lehman Brothers.

Every clearing house member rightly worries about their contribution to the mutual default fund being used to cover losses following the default of another member. But how likely is the type of loss seen at Nasdaq, and what can the CCPs do to make sure that they are not the ones that get caught out?

History has shown that it is very rare for a defaulting member's margin to be insufficient to cover the losses seen in a default. So are the CCPs getting their risk management right, or have they just been lucky?

Regardless of the algorithm used, Initial Margin is parameterised to cover losses under normal market conditions. By this we mean to a given confidence level, dictated by a mixture of regulation and the CCP’s own risk appetite.

Initial margin was never meant to cover losses under stressed market conditions. The spread moves seen in the Nasdaq power markets, at 17 times previously observed levels, definitely count as a stress event. Therefore, it is not surprising that Initial Margin in this case wasn't sufficient to cover the loss.
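To make the distinction concrete, here is a minimal sketch (with entirely made-up numbers, not Nasdaq's actual figures) of Initial Margin set at a confidence level on "normal" P&L history, compared against a stress move of the magnitude reported:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily P&L history for a spread position (EUR),
# drawn from "normal" market conditions.
pnl = rng.normal(loc=0.0, scale=100_000, size=750)

# Initial Margin parameterised to a 99.5% confidence level
# (the level is a mix of regulation and the CCP's own risk appetite).
initial_margin = -np.quantile(pnl, 0.005)

# A stress move 17x the largest previously observed adverse move,
# of the kind reportedly seen in the Nordic/German power spread.
worst_observed = -pnl.min()
stress_loss = 17 * worst_observed

print(f"Initial Margin:        {initial_margin:,.0f}")
print(f"Stress-scenario loss:  {stress_loss:,.0f}")
print(f"IM covers stress move: {stress_loss <= initial_margin}")
```

Margin calibrated to normal conditions is, by construction, dwarfed by a 17x move.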

Did the Nasdaq use of SPAN contribute to the size of the loss?

Like many ETD CCPs, Nasdaq use SPAN as the margin algorithm for their power markets. Everyone agrees that SPAN is ripe for change (it is 30 years old, after all). But would the losses have been smaller if Nasdaq had moved to a more modern algorithm, for example one based on VaR? The answer is almost certainly no.

There have been a number of changes to the SPAN algorithm over the years to make it more suitable for the more complex markets, for example power. In particular, the way that spreads are treated (both within a contract and between contracts) has been enhanced so that it can take into account factors such as the seasonality of commodity products. Nasdaq make extensive use of this 'multi-tier' capability, setting separate rates for different combinations of expiry and product; they currently have nearly 15,000 inter-commodity offsets defined. Any credits for spreads between Nordic and German power would have been calculated based on specific expiries.
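As an illustration of why this parameterisation is operationally heavy, here is a toy sketch of a tiered inter-commodity offset. The product names, scanning-risk numbers and credit rate are invented for illustration, not Nasdaq's actual parameters; the point is that each offset is keyed to a specific product/expiry pair, which is how a CCP ends up maintaining thousands of them:

```python
# Per-leg scanning risk (EUR), keyed by (product, expiry) - illustrative only.
scanning_risk = {
    ("NORDIC_POWER", "2018Q4"): 400_000,
    ("GERMAN_POWER", "2018Q4"): 380_000,
}

# Credit rate applied when two specific legs are held as a spread.
# A real CCP may define thousands of these pairwise entries.
offset_rates = {
    (("NORDIC_POWER", "2018Q4"), ("GERMAN_POWER", "2018Q4")): 0.70,
}

def span_style_margin(leg_a, leg_b):
    """Gross scanning risk minus a tiered spread credit for the pair."""
    gross = scanning_risk[leg_a] + scanning_risk[leg_b]
    rate = (offset_rates.get((leg_a, leg_b))
            or offset_rates.get((leg_b, leg_a))
            or 0.0)
    # Credit is granted against the smaller leg's scanning risk.
    credit = rate * min(scanning_risk[leg_a], scanning_risk[leg_b])
    return gross - credit

m = span_style_margin(("NORDIC_POWER", "2018Q4"), ("GERMAN_POWER", "2018Q4"))
print(f"Margin with spread credit: {m:,.0f}")  # 400k + 380k - 0.7*380k = 514,000
```

Every expiry pair needs its own rate, which is exactly the operational burden discussed below.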

In some ways this leads to a very similar result to that achieved by using Historical VaR, on the assumption that the SPAN parameters are set to achieve the same level of coverage. The VaR calculation would rely on historic correlations, which were seen to break down in this case. The problem with SPAN here is not whether it can accommodate the complexity of commodity products, but rather the operational issues involved in setting all these parameters.
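A small simulation (entirely synthetic data) shows why Historical VaR would have fared no better: when two legs are historically correlated, the spread's VaR is small, so a day on which the correlation breaks down produces a loss many times the margin held:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily returns for two historically correlated power markets
# (synthetic numbers, not real market data).
n = 500
common = rng.normal(0.0, 1.0, n)
nordic = common + rng.normal(0.0, 0.3, n)   # correlation with german ~0.9
german = common + rng.normal(0.0, 0.3, n)

# P&L of a spread position: long Nordic, short German, 100k EUR per unit.
spread_pnl = 100_000 * (nordic - german)

# 99% Historical VaR of the spread: small, because the legs mostly offset.
var_99 = -np.quantile(spread_pnl, 0.01)

# A correlation-breakdown day: the legs move apart by 5 units each,
# far outside anything in the lookback window.
breakdown_loss = 100_000 * (5 + 5)

print(f"99% historical VaR: {var_99:,.0f}")
print(f"Breakdown-day loss: {breakdown_loss:,.0f}")
```

The historical window simply contains no precedent for the breakdown, so the VaR-based margin is calibrated to the wrong regime.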

In addition, by using SPAN you have more control. Because the parameters are set 'manually', they can take into account future events. By looking, for example, at long-range weather forecasts or election dates, it is possible to adjust the SPAN parameters to allow for their potential impact.

So why aren’t more big losses seen in default?

The default of Einar Aas on Nasdaq was very unusual. Contrary to the common assumption, defaults are not normally triggered by the member being unable to meet a margin call.

Most defaults are much more like Lehman Brothers (although luckily smaller and with less global impact). Discussions are held, the receivers are called in, and the default is declared over the weekend giving the CCPs and members the time to sort themselves out before the markets open on Monday.

The closest previous default to this one is probably Griffin Trading. There the inability to pay the margin on one market led to a default being declared across multiple markets. But no call was made on any default funds, despite the Griffin portfolios being composed of individual trader strategies. Maybe it was just lucky that they were concentrated in interest rate products.

But on the whole, Initial Margin is usually sufficient to cover losses seen in a default, and money is returned to the receivers. So why is this the norm?

Most clearing member portfolios are a lot more diverse than that of Aas. This diversification effect means that even though there may be losses in one group of products or strategies, these will be compensated for by gains in another area.
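The diversification effect can be illustrated with synthetic data: strategies that individually produce a given worst day produce a much milder worst day when held together in one portfolio:

```python
import numpy as np

rng = np.random.default_rng(2)

# Daily P&L of four largely independent strategies (illustrative data, EUR).
strategies = rng.normal(0.0, 1_000_000, size=(4, 500))

# A concentrated member holds everything in one strategy;
# a diversified member spreads the same capital across all four.
concentrated = strategies[0]
diversified = strategies.mean(axis=0)

print(f"Concentrated worst day: {concentrated.min():,.0f}")
print(f"Diversified worst day:  {diversified.min():,.0f}")
```

Losses in one strategy are partly offset by gains in another, so the portfolio's worst day is far less extreme than the concentrated book's. Aas's position had no such offsets.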


Plus, as explained above, most defaults occur because the member has a more general issue rather than one specifically related to a large loss on the market. On its own, an exceptional market move against a member won't lead to a call on the default fund; it needs to coincide with the clearing member being unable to cover their margin.

So the chances are that, in the unlikely event of a default, the CCP will be left with a reasonable portfolio to close out. And there are measures that they take to make sure that this is the case.

What do CCPs do to reduce the risk of a ‘bad’ default?

All CCPs monitor the credit standing of their clearing members. There are certain minimum criteria that firms need to meet to become a member, including capital requirements and company profile (there aren't many CCPs that have individual traders as clearing members). Although not a fail-safe, this helps to ensure that members don't default. But if the worst happens, there are other measures that CCPs take to make sure they have sufficient funds to cover any losses.

The assumption is usually that the only margin that CCPs take from members is the Initial Margin calculated by algorithms such as SPAN, but this isn’t the case. They all have the ability to request Additional Margin if they feel it is necessary.

Backtesting will highlight where margin historically has not been sufficient to cover observed losses. When you combine this with stress testing, which is used to determine the size of the default fund, you have the ability to identify portfolios that may cause issues. As an example, there are CCPs that require any member with large stress losses to pay additional margin. The size of these theoretical losses is of course dependent on the stress scenarios, and given the large moves seen in the spreads at Nasdaq, it is doubtful whether this would have featured among the 'extreme but plausible' scenarios that regulation says should be included in stress testing.
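Backtesting in this sense is just counting exceedances: days on which the observed loss exceeded the margin held, compared with the number expected at the target coverage level. A minimal sketch with simulated P&L:

```python
import numpy as np

rng = np.random.default_rng(3)

# One year of simulated daily P&L for a portfolio (illustrative, EUR).
pnl = rng.normal(0.0, 100_000, 250)

# Margin intended to give 99% coverage of daily losses
# (~2.33 standard deviations for this simulated distribution).
margin = 233_000

# Backtest: count days on which the loss exceeded the margin held.
breaches = int(np.sum(-pnl > margin))
expected = 0.01 * len(pnl)

print(f"Breaches: {breaches} (expected ~{expected:.1f} at 99% coverage)")
```

A breach count well above the expected number flags a portfolio whose margin model is under-covering, before a default makes the shortfall real.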

New regulation has increased the types of risk that CCPs should be including in their margin calculation. Liquidity and concentration margin are now routinely calculated by many CCPs. These rely on parameterisation to quantify the level of risk, so as a straight calculation they again may not have been sufficient to cover the level of losses observed. But the additional oversight of the positions means that a red flag will be raised for particularly worrying portfolios, with the clearing member being asked either to reduce their position or to provide additional collateral.
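A concentration add-on is typically parameterised off an assumed daily market liquidity. The sketch below (all names and numbers are illustrative assumptions, not any CCP's actual methodology) scales margin up once a position would take longer than the standard holding period to close out:

```python
# Illustrative parameters: how many contracts the market can absorb per
# day, and the base margin per contract.
DAILY_LIQUIDITY = 10_000        # contracts per day (assumed)
BASE_MARGIN_PER_CONTRACT = 500  # EUR (assumed)

def margin_with_concentration(position, mpor_days=2):
    """Base margin, scaled up when the position exceeds what the market
    can absorb within the standard Margin Period of Risk."""
    base = abs(position) * BASE_MARGIN_PER_CONTRACT
    days_to_close = abs(position) / DAILY_LIQUIDITY
    if days_to_close <= mpor_days:
        return base
    # Scale by square-root-of-time for the extra close-out days.
    scale = (days_to_close / mpor_days) ** 0.5
    return base * scale

small = margin_with_concentration(5_000)     # closes within 2 days: base only
large = margin_with_concentration(100_000)   # 10 days to close: add-on applied
print(f"Small position margin: {small:,.0f}")
print(f"Large position margin: {large:,.0f}")
```

The add-on is crude, but it forces the conversation: a position this size either shrinks or brings extra collateral.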

Even within the Initial Margin calculation, many CCPs now go beyond the requirements of regulation. If using Historic VaR, they may calculate it at a higher confidence level, or include tail risk by using Expected Shortfall (the average of the largest losses) rather than VaR. Although for ETD the required holding period (Margin Period of Risk) is 2 days, many CCPs will use a longer period for less liquid products. And the more modern algorithms will include stress scenarios. These are generally there to prevent procyclicality (so that members don't have to put up more margin just after a big market move, which would likely trigger further moves), but they have the advantage that some unexpected shifts will be covered within the Initial Margin.
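The difference between VaR and Expected Shortfall is easiest to see on fat-tailed P&L: ES averages the losses beyond the VaR point, so it always sits deeper in the tail (synthetic data for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated daily P&L with fat tails (Student-t) so tail risk is visible.
pnl = 100_000 * rng.standard_t(df=3, size=2000)
losses = -pnl

# 99% VaR: the loss exceeded on 1% of days.
var_99 = np.quantile(losses, 0.99)

# 99% Expected Shortfall: the average of the losses beyond the 99% VaR.
es_99 = losses[losses >= var_99].mean()

print(f"99% VaR: {var_99:,.0f}")
print(f"99% ES:  {es_99:,.0f}")
```

Margining to ES rather than VaR means the margin reflects how bad the tail is, not just where it starts.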

Most importantly, CCPs are now calculating risk intra-day based on real-time prices. This isn't always perfect (some of the systems still need to catch up), but it means that additional margin will be requested immediately if any loss over initial margin is seen.
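A minimal sketch of such an intraday check, with illustrative numbers: revalue the portfolio on real-time prices and call for a top-up as soon as the loss erodes the Initial Margin held:

```python
def intraday_margin_call(initial_margin, collateral_held, intraday_loss,
                         buffer_fraction=0.0):
    """Return the collateral top-up to request intraday, or 0 if none
    is needed. An optional buffer lets the CCP call early."""
    required = initial_margin * (1 + buffer_fraction) + intraday_loss
    shortfall = required - collateral_held
    return max(shortfall, 0.0)

# A member holding exactly its Initial Margin suffers a 2m intraday loss:
# the full loss is called immediately rather than waiting for end of day.
call = intraday_margin_call(initial_margin=10_000_000,
                            collateral_held=10_000_000,
                            intraday_loss=2_000_000)
print(f"Intraday call: {call:,.0f}")  # 2,000,000
```

The value is in the timing: the gap between loss and collateral is closed during the day, not discovered the next morning.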

What will be the likely fallout?

Nasdaq have already increased the margin rates on the power market and engaged a third party to review their risk management. I expect that a number of other CCPs, especially those covering similar markets or with individual traders as clearing members, will be doing the same.

The fact that SPAN, a 30-year-old algorithm, was responsible for calculating the margin that proved insufficient is likely to add impetus to the moves already in place at various CCPs to look at a replacement. Most of these are assessing Historical VaR, but as this would also potentially have under-margined in this scenario, they will need to consider the additional factors they may need to include.

The default process is another area which might require some review. For the ETD markets, the close-out of the defaulting member's positions has shifted from trading out the position (which admittedly only works for futures or futures-style options) to an OTC-style auction. The auction process tends to lead to a low price being obtained for the portfolio, increasing the losses.

Whether the balance between default fund and margin is correct is an area that clearing members would probably like to see reviewed. If Initial Margin is higher, clearing members need to provide more collateral; but if Initial Margin is lower, they may instead have to replenish the default fund, and that is lost cash, a much worse position to be in.

All we can hope is that the CCPs take this default as a warning and make any necessary changes to their operational processes and risk management. And that should mean that a default leading to a call on other clearing members to replenish the default fund should become even less likely.