Edhec-Risk
Risk - August 03, 2006

Interview with Bruno Dupire

Mr Bruno Dupire headed the Derivatives Research teams at Société Générale, Paribas Capital Markets and Nikko Financial Products before joining Bloomberg in New York to develop pricing, risk management and arbitrage models. He is best known for having pioneered the widely used Local Volatility model (simplest extension of the Black-Scholes-Merton model to fit all option prices) in 1993 and subsequent stochastic volatility extensions. Prior to this, he obtained a Master’s Degree in Artificial Intelligence, a PhD in Numerical Analysis and introduced the use of Neural Networks for financial time series forecasting. He is a Fellow and Adjunct Professor at NYU and he is in the Risk magazine “Hall of Fame” of the 50 most influential people in the history of Derivatives and Risk Management. He is the recipient of the 2006 “Cutting edge research” award from Wilmott magazine.



Could you tell us why you decided to make a special presentation at EDHEC Business School?

Bruno Dupire: There are several reasons. Firstly, EDHEC’s excellent reputation in asset management, which is not a subject on which I focus directly, but is part of the topics covered by Bloomberg in the context of its global mission, and EDHEC is an excellent “brand name.” Also the pleasure of revisiting a region that is both very enjoyable and dynamic. I’m delighted to see that EDHEC has a leading role here.

You are best known for your work on option pricing and stochastic volatility models, and particularly for your extension to the Black-Scholes model. Now that a few years have gone by since it was first introduced, could you give us an assessment of how it has been applied in the industry? In what ways have industry practices been changed by your work?

Bruno Dupire: Local volatility is something that I developed in 1992/1993. In 1992 at Société Générale, in a discrete version, i.e. with trinomial trees, through a deterministic volatility model depending on the spot price and time, which enables all the market option prices to be recovered. In 1993 at Paribas I developed a continuous-time version, which is frankly the best way to view the matter, since it can naturally be discretised with the help of partial differential equations. At the time, I think that people were perhaps not quite ready to absorb it, and what I have observed more than ten years later is that people have a much better understanding of the usefulness and limitations of the model. In any event, all the banks have adopted or use this model.

The choice of model is essentially Black-Scholes, local volatility or stochastic volatility, such as the Heston model, or possibly jumps. But as far as traders’ practices are concerned, it still comes down to deformations of Black-Scholes, possibly with local volatility. In the end, despite the fact that all research teams have devoted a lot of effort to stochastic volatility and jumps, there is a striking divergence between the evolution of research groups and the non-evolution of traders as a whole. This is probably because it is better to have a simple tool, and to know the limits of its application, in order to adapt it to fit the market.

So, all the banks now have or use local volatility models. The model's limitations are fairly clear, even if they are often overstated. Applied to equity markets, the problem is that it generates forward smiles or forward skews that are too flat, but what I find striking is that people criticise local volatility by saying that it is a poor forecast of future volatility, which to me makes no more sense than criticising the concept of instantaneous forward rate. The main contribution of local volatility, its principal quality, is that it enables this notion of instantaneous forward variance to be extracted.

…and it can then be locked in rather than being seen as a predictor of future volatility…

Bruno Dupire: Exactly. Not only can we calculate it but we can also lock it in and construct an instantaneous variance swap conditional on the level of the index, which means that if we convert the implied market volatility into local volatility and we wish to express a disagreement between our own prediction and the local volatility, it is possible to take advantage of it, relative to a position in a theoretical frictionless market. So it is a value that we can lock in, and it makes no sense to say that it is a poor model; it is simply the first step to take in order to respect the market price. If we then wish to have a more complicated model, like a stochastic volatility model, it will essentially be a ‘noisy’ variant of local volatility. I have a result from 1996 which states that the conditional expectation of the instantaneous variance, conditioned on the level of the index, is the local variance, or local volatility squared, which we can read from the market price of options.
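
[Editor's note: in its simplest continuous-time form, assuming zero rates and dividends, the local variance can be read directly from a continuum of call prices $C(K,T)$, and the 1996 result referred to above identifies it with the conditional expectation of the instantaneous variance:]

$$\sigma_{\mathrm{loc}}^2(K,T) \;=\; \frac{2\,\partial_T C(K,T)}{K^2\,\partial_{KK} C(K,T)}, \qquad \sigma_{\mathrm{loc}}^2(K,T) \;=\; \mathbb{E}\!\left[\sigma_T^2 \,\middle|\, S_T = K\right].$$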

Is it an unbiased estimate of the expected value under the historical probability?

Bruno Dupire: No, as it contains risk premia and supply and demand effects; it is a bit like an interest rate model – you have the term structure of interest rates, so you can compute the forward rates – and within a model that fits the initial term structure, the future curve will be the sum of three components: the forward curve, a convexity adjustment, and a centred random component.

Regarding the strategies that are increasingly used in the alternative universe, and that we follow closely here at EDHEC, such as volatility arbitrage, volatility as an asset class, etc., do you think that a manager following that type of strategy could consider this instantaneous forward volatility as a kind of benchmark, a neutral view of the market, around which he could express his active views?

Bruno Dupire: Yes, absolutely. If he reasons naturally in terms of volatility according to different future scenarios, for example different levels of the index in the future, the natural benchmark is indeed the local volatility. In my opinion, that is what can be locked in, so that if he finds that the local volatility is 10% when the CAC 40 is at 5,500 at the end of the year, and he thinks that this is too low, he could indeed construct an arbitrage portfolio which, if the level turns out to be around 5,500, will capture this volatility differential, i.e. the difference between the volatility actually realised and the 10% level locked in initially. It is logical, therefore, to have that initial reference. Having said that, for volatility arbitrage, there is a panoply of strategies that can be deployed.

Could you briefly introduce the different ways of carrying out volatility arbitrage to our readers?

Bruno Dupire: I’d be happy to. The first step is to identify the instruments that capture the different shades of volatility and see how they can be combined or played against one another so as to extract statistical arbitrage opportunities. If we look at a simple situation, with one underlying, one index, for example the S&P 500, we can look for arbitrage between the historical volatility and the implied volatility, which is done frequently in the market, but to do so correctly one must understand that historical volatility must be conditioned on future levels. It is not enough to observe that historical volatility is 15% and implied market volatility is 18% and conclude that there is an arbitrage opportunity. First of all, the implied volatility will depend greatly on the strike of the options, so a simplistic view like that would systematically lead one to sell the puts with the most expensive volatility. But one must understand that the historical volatility must itself be conditioned on future scenarios. To know whether it is really advantageous to sell a put that is deeply out-of-the-money, one shouldn’t compare the implied volatility of the put with the historical volatility estimated without conditioning, but instead understand that the put will be sensitive to the variations of the underlying in the zone where it has a strong gamma, in other words in a region where the index is at its lowest levels.

One must therefore be able to calculate conditional estimators of the historical volatility, conditioning on those low levels. At Bloomberg we have developed a project called ‘fair skew’, where we calculate these conditional volatilities.

… through parametric or non-parametric methods?

Bruno Dupire: It is a non-parametric method. It is essentially based on back-testing of delta hedging. We call them ‘breakeven volatilities.’ The question we ask is: on average, for given levels of moneyness and maturity, what is the implied volatility that should have been used to sell the option and then delta hedge it so as to end up with a neutral P&L?
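
[Editor's note: a minimal sketch of how such a breakeven volatility could be computed on a single historical path; the Black-Scholes delta hedging at zero rates, the daily rebalancing and the function names are illustrative assumptions, not the Bloomberg 'fair skew' methodology itself.]

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_price_delta(S, K, T, sigma, r=0.0):
    """Black-Scholes call price and delta (zero dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return price, norm.cdf(d1)

def hedged_pnl(path, K, sigma, dt=1.0 / 252):
    """P&L of selling a call at volatility `sigma` and delta hedging it
    daily along the historical price path (interest ignored)."""
    n = len(path) - 1
    premium, delta = bs_price_delta(path[0], K, n * dt, sigma)
    cash = premium - delta * path[0]                 # premium received, initial hedge bought
    for i in range(1, n):
        _, new_delta = bs_price_delta(path[i], K, (n - i) * dt, sigma)
        cash -= (new_delta - delta) * path[i]        # rebalance the hedge
        delta = new_delta
    cash += delta * path[-1]                         # unwind the hedge
    return cash - max(path[-1] - K, 0.0)             # pay the option payoff

def breakeven_vol(path, K):
    """Volatility at which selling and hedging the option breaks even."""
    return brentq(lambda s: hedged_pnl(path, K, s), 1e-3, 2.0)
```

Averaging these breakeven levels over many historical windows and over a grid of moneyness and maturities gives a non-parametric picture of the kind described above.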

So that is the classic arbitrage, though slightly refined, between historical volatility and implied volatility. Paradoxically, there are also arbitrages between one historical volatility and another historical volatility measured at different frequencies. One calculates the historical volatility over several time scales, for example every five minutes, hourly, daily, weekly, monthly, etc. Generally we will find relatively different levels. What is interesting is that if we observe fairly large differences, for example an (annualised) daily volatility of 20% versus a weekly volatility of 17%, it is possible to exploit that through a strategy that involves trading only the index.
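
[Editor's note: as an illustration, realised volatility can be estimated from the same price series at several sampling frequencies; the function name and the annualisation conventions below are assumptions made for the example.]

```python
import numpy as np

def realised_vol(prices, step, periods_per_year):
    """Annualised close-to-close volatility, sampling every `step` observations."""
    sampled = np.asarray(prices, dtype=float)[::step]
    log_returns = np.diff(np.log(sampled))
    return np.std(log_returns, ddof=1) * np.sqrt(periods_per_year)

# With daily closes: compare daily versus weekly sampling.
# daily_vol  = realised_vol(closes, step=1, periods_per_year=252)
# weekly_vol = realised_vol(closes, step=5, periods_per_year=52)
```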

If you have bought an option, for example, and you are wondering how to delta hedge it, given that you know there is this volatility differential, there is an incentive to delta hedge it on a daily basis to take advantage of the large variations. However, if you have sold the option, you might wish to hedge it less frequently, for example every week – which we don’t necessarily recommend, because it introduces a lot of noise – but let’s say we do so in order to pay for only 17% volatility. If we consider an option position that has a constant gamma, which corresponds to holding a collection of strikes rather than a single strike, so as to be exposed to volatility irrespective of the level of the index, then we can imagine that we have bought and sold such an option simultaneously, and that, fictitiously, we delta hedge the long position on a daily basis and the short position on a weekly basis. When we look at the net strategy, because the two option positions offset each other, and a large part of the delta hedge is offset as well, the residual part is an intra-week mean reversion strategy.

Broadly speaking, it corresponds to setting a reference level on the Monday morning and then, every day, depending on whether we are above or below it, selling or buying the index in a quantity proportional to the difference between the Monday-morning reference level and the current value. We continue this until the end of the week, fix a new reference level the following Monday, and so on for several weeks. The P&L realised on this strategy will be the difference between the historical daily volatility squared and the historical weekly volatility squared. One can also have fun with phase arbitrage: people say, for example, that the daily volatility calculated from three in the afternoon to three in the afternoon is lower than from ten in the morning until ten in the morning, and the same principle can be applied there.
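
[Editor's note: a minimal sketch of this net strategy, under the simplifying assumptions of a constant gamma, weeks of exactly five trading days and a reference taken at the Monday close; names are illustrative.]

```python
import numpy as np

def intra_week_mean_reversion_pnl(closes, gamma=1.0):
    """Each week, set the reference to the Monday close; each day hold
    -gamma * (previous close - reference) units of the index, i.e. short
    when above the reference and long when below.  The cumulative P&L is
    approximately gamma/2 * (sum of squared daily moves - sum of squared
    weekly moves), the daily-versus-weekly variance differential."""
    closes = np.asarray(closes, dtype=float)
    pnl = 0.0
    for start in range(0, len(closes) - 5, 5):
        week = closes[start:start + 6]   # Monday close .. next Monday close
        reference = week[0]
        for t in range(1, len(week)):
            position = -gamma * (week[t - 1] - reference)
            pnl += position * (week[t] - week[t - 1])
    return pnl
```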

A final point is that the right estimate of volatility for an option trader is the one that corresponds to his delta hedging strategy. For example, he will rely on the daily estimate rather than the weekly estimate if he hedges on a daily basis. We can however go further and examine the optimal strategy more closely with a ‘move-based’ rather than a ‘time-based’ perspective, which leads to other estimates of the volatility.

Other instruments allow bets on the volatility of an index: notably plain vanilla European options (one strike and one maturity), and variance swaps, which allow the general level of realised volatility to be captured. A variance swap corresponds to delta hedging a portfolio of options spread over a series of strikes, weighted so as to reproduce a logarithmic payoff profile. Generally, the delta hedge eliminates the first-order risk, the directional risk, and leaves the exposure to the second-order risk, the volatility. For a naked position this is a pure risk, and in the case of a variance swap it is exactly what allows the bank to reconstitute or synthesise the payoff it gives to the client.
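
[Editor's note: the standard static replication behind this weights out-of-the-money options by the inverse of the squared strike; assuming continuous monitoring, no jumps and a continuum of strikes, the fair variance strike is]

$$K_{\mathrm{var}} \;=\; \frac{2\,e^{rT}}{T}\left(\int_0^{F_0}\frac{P(K)}{K^2}\,dK \;+\; \int_{F_0}^{\infty}\frac{C(K)}{K^2}\,dK\right),$$

where $F_0$ is the forward and $P(K)$, $C(K)$ are today's put and call prices.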

So we have a link between European options and variance swaps, even though these are quite different markets – hedge funds, for example, make pure volatility bets using variance swaps and don’t necessarily want to hold a portfolio of options and delta hedge it. We can therefore observe certain discrepancies. Technically, a major issue is the extrapolation of the volatilities, because quoted strikes only cover a limited range.

There are also listed volatility products, essentially at the CBOE with the VIX contract. The VIX was previously an at-the-money volatility index with one month to maturity on the S&P 100. Now it is an index of the overall volatility level, corresponding to the collection of strikes used for variance swaps, still with a maturity of one month, but on the S&P 500. Bloomberg has developed a pricing page with an upper bound obtained by dominance arguments and an estimate of the fair price, because it is possible to replicate the square of the VIX – the VIX is the square root of something that is replicable, so it is concave in that quantity and carries an optionality bias. There are even options on the VIX contract now.
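
[Editor's note: the bias follows from Jensen's inequality. Since the square root is concave,]

$$\mathbb{E}\!\left[\mathrm{VIX}_T\right] \;=\; \mathbb{E}\!\left[\sqrt{\mathrm{VIX}_T^{\,2}}\right] \;\le\; \sqrt{\mathbb{E}\!\left[\mathrm{VIX}_T^{\,2}\right]},$$

so the replicable quantity $\mathbb{E}[\mathrm{VIX}_T^{\,2}]$, a forward variance, only gives an upper bound on the fair futures price; the gap widens with the uncertainty of volatility.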

There is also a less popular contract on realised variance, which allows bets to be made on the level of historical volatility over, for example, the summer of 2007; such a contract can be traded from now until the beginning of the summer of 2007 and then up to the end of its term. Once inside the period in question, part of the variance has already been realised and the uncertain part decreases up until maturity, and we can work out the arbitrage relationships between all of these contracts.

A final area with regard to indices is options on realised variance, which is a very active market, notably in the United States with Bank of America and Merrill Lynch. This is a contract that allows calls and puts to be traded on the realised variance, and it can be seen as a sort of option on a variance swap. The classic way in which the market handles a product of that type is through a dynamic strategy that involves holding variance swaps. This is based on the idea that if, for example, we have an option on the price of pork bellies and a futures contract on pork bellies, we will delta hedge with the futures contract; what we then need is the volatility of that futures contract. By analogy, people say that what determines the price is essentially the volatility of the variance swap. In fact, the variance swap corresponds to an indiscriminate collection of strikes, so we lose the skew information, which itself carries very rich information on the uncertainty of the realised volatility.
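
[Editor's note: the analogy amounts to treating the variance swap level $V_0$ as the underlying futures and applying a Black-type formula with a volatility-of-variance parameter $\omega$ – a common practitioner shortcut, and one that, as noted above, discards the skew information:]

$$C \;=\; e^{-rT}\big(V_0\,N(d_1) - K\,N(d_2)\big),\qquad d_{1,2} \;=\; \frac{\ln(V_0/K)}{\omega\sqrt{T}} \,\pm\, \frac{\omega\sqrt{T}}{2}.$$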

To finish with volatility arbitrage, if we move on to arbitrage with several underlyings, for example arbitrage on the volatility spread between the CAC and the DAX, with all of these strategies one must take care to use several strikes if possible. Otherwise it is easy to be right about the convergence of a spread, for example, and still lose money. The same applies, of course, to all dispersion arbitrage: selling an option on an index while buying options on the individual constituents. We often see variance swaps on indices traded against variance swaps on individual securities.

Finally, what areas do you think will be of importance in the future and what are the directions or ideas for future research?

Bruno Dupire: Correlation is a huge issue. It comes into play in the products (multi-assets, hybrids), in the models (price/volatility correlation), in asset allocation (diversification) and in credit (between names). Credit modelling is still in its infancy and it is only now that we are witnessing promising attempts to handle correlation. We also have correlation products, notably in equities and in FX.

Much effort is now being expended on the study of market microstructure to try to better understand price formation. The main applications are high frequency arbitrage and optimal trade execution. These considerations of liquidity and frictions are now central, after having been disregarded for a long time by theoreticians as inelegant and obstructing financial theory advances. Psychology also comes into play, with behavioural finance inspecting the irrational behaviour of agents.

Finally, I think it is time to go back to the initial role of derivative products, which is to transfer risks optimally. Individuals and corporations face certain risks, and financial institutions have the technology to tailor products that fit these specific needs and to reshape their volatility position to absorb the risk. We are currently witnessing plenty of equity retail products that make no sense to the users and create risks for the banks that sell them. Pension-related products pose interesting challenges: in their design, which requires a good understanding of the user’s benchmark; in their maturity, which can be several decades; and in the underlyings, such as inflation, that they entail.


URL for this document:
http://www.edhec-risk.com/Interview/RISKArticle.2006-08-03.1854
