Your questions answered: new Q&A available on our MVA research
26th November 2015 Marc Henrard

We’ve received a number of follow-up questions on the back of the research paper on MVA we published last month in collaboration with University College London.

We’ve provided answers to the questions below, and will continue to update this page with answers to your questions as they come in. Please send your questions to Marc Henrard.

Q.1. In the paper, several measures are mentioned. Can you describe what they represent? Which of those measures is actually used in the computations?

A: There are two or three probability measures involved, depending on the perspective one takes. There is a pricing measure, used for option pricing and often denoted Q. There is the historical or statistical measure, often denoted P, which represents the real-world (economic) probability measure used for risk measurement and management. Sometimes P may also be viewed as the subjective measure of the clearing houses. We make this distinction between the two interpretations of the P-measure in the most recent version of the paper to account for the fact that the data used by clearing houses might not be a perfect representation of economic reality.

Before the actual IM computations begin, we calibrate the (swap) price model to swaption data. For this task we apply the pricing dynamics of the price process. The fact that the IM process is based on the dynamics of the swap price, calibrated to option data, links our IM valuation to market views on how the swap price might evolve in the future. However, calibration to option data alone might not encode enough information in the IM dynamics. A supplement of information is incorporated if we also calibrate the IM process to CCP data, which we assume to be historical data. We therefore need to consider the IM dynamics under the historical measure P so that the calibration to the clearing house data is consistent. Given the explicit formulae for the changes between the probability measures considered, we are able to produce consistent IM dynamics and thus also consistent margin valuation adjustments.

Since MVA can be viewed as a replicable pay-off dependent on the future IM payments, we may compute MVA as if it were an option. Because we can compute the IM under the statistical P-measure, and have an explicit measure-change relation between P and the risk-neutral measure Q, we can compute the MVA under the pricing measure, as is customary when pricing options. Our approach supports both views of a financial market: the risk-neutral one used for pricing and hedging, and the historical one applied for risk measurement, as in the case of IM computations. This consistency between the two main tasks in financial practice is, in our view, one of the valuable features of the proposed approach.
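To make the last step concrete, here is a minimal Monte Carlo sketch of MVA computed as an option-like expectation under Q of the discounted funding cost of future IM. Everything in it is an illustrative assumption: the lognormal IM paths stand in for the model-implied IM process after the measure change, and the flat rate and funding spread are placeholders, not figures from the paper.

```python
import numpy as np

# Minimal sketch: MVA as the Q-expected discounted funding cost of future IM,
# MVA ~ E^Q[ sum_i spread * IM(t_i) * D(t_i) * dt ].
# All dynamics and numbers below are illustrative assumptions.

rng = np.random.default_rng(42)
n_paths, n_steps, horizon = 10_000, 50, 5.0   # 5-year horizon, coarse grid
dt = horizon / n_steps
r, spread, sigma = 0.01, 0.005, 0.20          # flat rate, funding spread, IM vol

# Hypothetical IM paths: driftless lognormal dynamics around today's IM level.
im0 = 1_000_000.0
z = rng.standard_normal((n_paths, n_steps))
log_im = np.log(im0) + np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z, axis=1)
im = np.exp(log_im)

# Deterministic discount factors on the time grid.
t_grid = dt * np.arange(1, n_steps + 1)
discount = np.exp(-r * t_grid)

# Average over paths of the discounted cost of carrying the IM.
mva = np.mean(np.sum(spread * im * discount * dt, axis=1))
print(f"MVA estimate: {mva:,.0f}")
```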

Q.2. The paper makes use of the ‘rational multi-curve interest rate models’. Why this class of models?

A: When we started this research we had long discussions about the modelling of the interest rate market in a multi-curve framework. The first conclusion was that there is no standard “go-to” model. In the single-curve framework we might have selected an HJM model, but we wanted a multi-curve interest rate setup with several specific properties. The multi-curve models would need to give rise to term structures with realistic dynamics, both for the base curve and for the spread between OIS and LIBOR. The chosen modelling framework would need to produce parsimonious but nevertheless flexible models. It would also need to represent the market in a realistic way, calibrating well to term structures, option volatilities and skew (historical or implied smile). After reviewing several models, we found that the rational multi-curve models are satisfactory in this sense and offer a desirable level of flexibility for calibration to market data (cf. Section 6 of the white paper).
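For readers unfamiliar with this class, the toy one-factor sketch below shows the “rational” functional form: the whole discount curve at a future time is a ratio involving today’s curve, a deterministic loading and a single martingale state. The curve shape, the loading b and the exponential martingale are assumptions chosen for illustration, not the calibrated specifications of the paper.

```python
import numpy as np

# Illustrative one-factor "rational" discount-bond model: the time-t curve is
# P(t, T) = (P0(T) + b(T) * A_t) / (P0(t) + b(t) * A_t). The functional
# forms below are assumptions, not the paper's calibrated models.

def P0(t, r0=0.02):
    """Today's OIS discount curve (flat, for illustration)."""
    return np.exp(-r0 * t)

def b(t, alpha=0.15):
    """Deterministic loading on the martingale factor (an assumed shape)."""
    return alpha * (1.0 - np.exp(-0.5 * t))

def A(t, x, a=0.8):
    """Exponential martingale minus one, so that A_0 = 0; x is a draw of a
    standard Brownian motion at time t."""
    return np.exp(a * x - 0.5 * a**2 * t) - 1.0

def bond(t, T, x):
    """Model bond price P(t, T): one state variable drives the whole curve."""
    return (P0(T) + b(T) * A(t, x)) / (P0(t) + b(t) * A(t, x))

# A single simulated state at t = 1y reprices the entire term structure:
rng = np.random.default_rng(0)
x1 = rng.standard_normal() * np.sqrt(1.0)
print([round(bond(1.0, T, x1), 4) for T in (2.0, 5.0, 10.0)])
```

A parsimonious structure of this type is what keeps calibration tractable while still producing non-trivial curve and spread dynamics.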

Q.3. Can one calibrate your IM model to the IM as computed by the CCPs on the initial date?

A: Yes, and we are continuing work in this direction to improve the relationship to the historical databases utilised by a CCP. As explained in Q.1, after calibrating the swap price model to swaption data, we calibrate the IM model by comparing the model-implied P&L with the P&L implied by the data of a CCP. A small residual gap remains, and this is one of the aspects we are focusing on in ongoing research. However, we do obtain a perfect match for the starting value IM0: we multiply the model value by a constant factor so as to match the clearing house IM at time 0.
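In code, this time-0 adjustment is a one-line rescaling; the figures below are hypothetical.

```python
# Hypothetical figures: the model produces 0.92m at time 0 while the CCP
# requires 1.00m; one constant factor matches the two exactly at time 0.
im_model_0, im_ccp_0 = 920_000.0, 1_000_000.0
scale = im_ccp_0 / im_model_0        # constant multiplicative adjustment
print(scale * im_model_0)            # 1000000.0: exact match for IM0
```

The same factor would then scale the whole model IM process, leaving its dynamics otherwise unchanged.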

Q.4. What back-testing could be done within this approach?

A: We first need to clarify what we would like to validate through back-testing. One of the ultimate goals of our developments is to estimate MVA. It would be interesting to back-test the MVA computations through an analysis of the realised P&L of the hedging strategy proposed by the model. This would be a joint test of the IM dynamics we propose and of the cost of the IM, which is beyond the scope of our current analysis. For that validation, the daily strategies and the residual P&L would be computed on a daily basis over the life of different instruments. The underlying market is the IRS market, where maturities are relatively long, so we would need to collect at least several years of CCP (or other IM model) data to carry out the comparison.
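To illustrate what such a validation loop might look like, here is a schematic residual-P&L back-test on synthetic data; the MVA marks, hedge ratios and price series are toy stand-ins for the model outputs and the market data.

```python
import numpy as np

# Schematic daily residual-P&L back-test. All series are synthetic toys;
# in practice the MVA marks and hedge ratios would come from the model and
# the prices from market data over the life of the instrument.

rng = np.random.default_rng(1)
n_days = 250
prices = 100.0 + np.cumsum(rng.normal(0.0, 0.5, n_days))   # toy hedge instrument
mva_marks = 0.3 * prices + rng.normal(0.0, 0.05, n_days)   # toy model MVA values
hedge_ratio = np.full(n_days, 0.3)                         # toy model deltas

# Daily residual: the change in the MVA mark not explained by the hedge.
d_mva = np.diff(mva_marks)
d_hedge = hedge_ratio[:-1] * np.diff(prices)
residual = d_mva - d_hedge
print(f"cumulative residual P&L: {residual.sum():.3f}, daily std: {residual.std():.3f}")
```

A small, unbiased residual would support the IM dynamics and the IM cost model jointly, as described above.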

Another validation that could be done is a comparison between the IM distribution estimated by our approach and the actual distribution at a given time interval in the future, let’s say one year. Estimating the model distribution today (under the historical measure) and waiting one year to compare with the realised IM gives us only one data point. To have a meaningful comparison set, we probably need at least 100 points, which would mean, if we want the different experiments to be independent, waiting 100 years. We could probably make do with a shorter time interval and overlapping data, but in any case it will require historical data, including CCP data, over long periods.
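The counting argument can be made explicit with the numbers mentioned above:

```python
# One-year horizon, 100 comparison points: independent observations each
# need a full year of data, overlapping daily windows only one extra day.
horizon_days, target_points = 252, 100
years_independent = target_points * 1.0                      # 100 years of data
days_overlapping = target_points + horizon_days - 1          # 351 trading days
print(years_independent, round(days_overlapping / 252, 2))   # 100.0 vs ~1.39 years
```

The overlapping windows are of course strongly dependent, which is why long histories are still needed for a convincing comparison.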

A third meaningful notion of back-testing in our context would be to take the IM figures computed by a clearing house, say between 2013 and 2014, calibrate the adjusted IM model to this data, and then compute the IM values with the model and the realised swap curves for the period 2014-15. One then compares the IM values required by the clearing house over 2014-15 with those obtained from the IM model.
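A schematic version of this third back-test, on synthetic data, might look as follows; here the “model” is reduced to a single scale factor fitted on the first period, standing in for the calibration of the adjusted IM model.

```python
import numpy as np

# Toy out-of-sample back-test: fit on the first period ("2013-14"), predict
# the second ("2014-15"), compare with the CCP's realised requirements.
# Both series and the one-parameter "model" are synthetic stand-ins.

rng = np.random.default_rng(2)
ccp_im = 1e6 * (1.0 + 0.1 * np.cumsum(rng.normal(0.0, 0.02, 500)))  # toy CCP IM
model_im = 0.9 * ccp_im + rng.normal(0.0, 2e4, 500)                 # toy model IM

calib, test = slice(0, 250), slice(250, 500)
scale = np.mean(ccp_im[calib]) / np.mean(model_im[calib])   # fit on first period
predicted = scale * model_im[test]                          # predict second period

rel_gap = np.abs(predicted - ccp_im[test]) / ccp_im[test]
print(f"mean relative gap over the test period: {rel_gap.mean():.1%}")
```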

We have started some projects on back-testing related to CCPs and we will report on them in the future.

Q.5. Why should one apply your IM model if a perfect match with the IM data of clearing houses cannot be obtained?

A: First, as far as we know, current industry methods for the computation of IM do not allow one to compute the IM at a future point in time, and in addition for an arbitrarily long margin period of risk. Not having a method for the calculation of future initial margin payments prevents one from computing MVA. Secondly, we do not assume that one knows the algorithm with which a clearing house computes IM. In such a situation, we need to develop an “in-house” model for the computation of IM and then calibrate it to IM values posted in the past by clearing houses and to any additional historical data one may view as relevant for the IM computation. Thirdly, one could be of the opinion that the methodology of a clearing house needs innovation, even if one knew exactly what the clearing house algorithm is. The proposed IM approach could then be viewed as a kind of dynamical extension of a generic clearing house method for IM computations.

So, returning to the question of “back-testing” in the answer to Q.4 above, we provide a framework to calculate IM that is essentially based on taking a risk measure of a price increment over a margin period of risk, given the information available at the time the IM is computed. Since the goal is to calculate the IM at a future point in time, it is reasonable to calibrate our underlying price model to options, which provide “forward-looking” market information. In our view, this is another valuable feature of our approach. It is of course conceivable that clearing houses do not adopt such a dynamic and “forward-looking” methodology to compute the IM, and thus one can expect the clearing house IM and our IM values to differ a priori, possibly significantly. At this stage one could argue that this is the whole point of developing a new approach to IM, if it were felt that clearing houses should innovate the way they compute the IM. However, our approach can accommodate an adjustment such that IM data used by clearing houses can be fed into our model in order for the model IM to be closer to the IM values of the clearing houses. The currently proposed procedure is explained in Section 4 of our white paper.
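As a concrete, if deliberately simplified, instance of this recipe, the sketch below computes IM as a 99% value-at-risk of the price increment over a five-day margin period of risk. The Gaussian increment model and every parameter are illustrative assumptions; the framework itself accommodates other risk measures and the calibrated price dynamics.

```python
import numpy as np

# IM as a tail risk measure of the price increment over the margin period
# of risk: IM = VaR_level( V_{t+MPR} - V_t | F_t ). The Gaussian increment
# and the parameters are illustrative assumptions only.

def initial_margin(position_value, vol, mpr_days=5, level=0.99,
                   n_sims=100_000, seed=0):
    """99% VaR of a toy Gaussian price increment over the margin period."""
    rng = np.random.default_rng(seed)
    scale = position_value * vol * np.sqrt(mpr_days / 252.0)
    increments = scale * rng.standard_normal(n_sims)
    return -np.quantile(increments, 1.0 - level)  # loss quantile, sign-flipped

print(f"IM for a 10m position at 20% vol: {initial_margin(10e6, 0.20):,.0f}")
```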