Value-at-Risk: An Overview of Analytical VaR


by Romain Berry
J.P. Morgan Investment Analytics and Consulting

romain.p.berry@jpmorgan.com

In the last issue, we discussed the principles of a sound risk management function to efficiently manage and monitor the financial risks within an organization. To many risk managers, the heart of a robust risk management department lies in risk measurement through various complex mathematical models. But even one who is a strong believer in quantitative risk management would have to admit that a risk management function that heavily relies on these sophisticated models cannot add value beyond the limits of understanding and expertise that the managers themselves have towards these very models. Risk managers relying exclusively on models are exposing their organization to events similar to that of the sub-prime crisis, whereby some extremely complex models failed to accurately estimate the probability of default of the most senior tranches of CDOs1. Irrespective of how you put it, there is some sort of human or operational risk in every team within any given organization. Models are valuable tools but merely represent a means to manage the financial risks of an organization.

This article aims at giving an overview of one of the most widespread models in use in risk management departments across the financial industry: Value-at-Risk (or VaR)2. VaR measures the worst expected loss over a given horizon at a given confidence level under normal market conditions. VaR estimates can be calculated for various types of risk: market, credit, operational, etc. We will focus only on market risk in this article. Market risk arises from mismatched positions in a portfolio that is marked-to-market periodically (generally daily) based on uncertain movements in prices, rates, volatilities and other relevant market parameters. In such a context, VaR provides a single number summarizing the organization’s exposure to market risk and the likelihood of an unfavorable move. There are three main methodologies to compute VaR: Analytical (also called Parametric), Historical Simulation, and Monte Carlo Simulation. For now, we will focus only on the Analytical form of VaR; the other two methodologies will be treated separately in upcoming issues of this newsletter. Part 1 of this article defines what VaR is and what it is not, and describes its main parameters. In Part 2, we express VaR mathematically, work through a few examples and vary the parameters. Parts 3 and 4 briefly touch upon two critical but complex steps in computing VaR: mapping positions to risk factors and selecting the volatility model of a portfolio. Finally, in Part 5, we discuss the pros and cons of Analytical VaR.

Part 1: Definition of Analytical VaR

VaR is a predictive (ex-ante) tool used to prevent portfolio managers from exceeding risk tolerances that have been developed in the portfolio policies. It can be measured at the portfolio, sector, asset class, and security level. Multiple VaR methodologies are available and each has its own benefits and drawbacks. To illustrate, suppose a $100 million portfolio has a monthly VaR of $8.3 million at a 99% confidence level. This simply means that there is a 1% chance that the portfolio loses more than $8.3 million over any given month, under normal market conditions.

It is worth noting that VaR is an estimate, not a uniquely defined value. Moreover, the trading positions under review are fixed for the period in question. Finally, VaR does not address the distribution of potential losses on those rare occasions when the VaR estimate is exceeded. We should also bear in mind these constraints when using VaR. The ease of using VaR is also its pitfall. VaR summarizes within one number the risk exposure of a portfolio. But it is valid only under a set of assumptions that should always be kept in mind when handling VaR.

VaR involves two arbitrarily chosen parameters: the holding period and the confidence level. The holding period corresponds to the horizon of the risk analysis. In other words, when computing a daily VaR, we are interested in estimating the worst expected loss that may occur by the end of the next trading day at a certain confidence level under normal market conditions. The usual holding periods are one day or one month. The holding period can depend on the fund’s investment and/or reporting horizons, and/or on the local regulatory requirements. The confidence level is intuitively a reliability measure that expresses the accuracy of the result. The higher the confidence level, the more likely we expect VaR to approach its true value or to be within a pre-specified interval. It is therefore no surprise that most regulators require a 95% or 99% confidence level to compute VaR.

Part 2: Formalization and Applications

Analytical VaR is also called Parametric VaR because one of its fundamental assumptions is that the return distribution belongs to a family of parametric distributions such as the normal or the lognormal distributions. Analytical VaR can simply be expressed as:

VaRα = xα × P   (1)
where:
  • VaRα is the estimated VaR at the confidence level 100 × (1 - α)%.
  • xα is the left-tail α percentile of a normal distribution with mean μ and standard deviation σ; that is, xα is the return defined by Pr(R ≤ xα) = α, where R is the return of the portfolio. In order for VaR to be meaningful, we generally choose a confidence level of 95% or 99%. xα is generally negative.
  • P is the marked-to-market value of the portfolio.

The Central Limit Theorem states that the sum of a large number of independent and identically distributed random variables with finite variance will be approximately normally distributed (i.e., will follow a Gaussian distribution, or bell-shaped curve). Whether it is realistic to assume that the returns of any given fund follow a normal distribution, even with a large sample of historical returns, is debatable; Analytical VaR nonetheless rests on this assumption. Under it, we can relate the return distribution to a standard normal distribution, which has a mean of zero and a standard deviation of one. Using a standard normal distribution enables us to replace xα by zα through the following transformation:

zα = (xα − μ) / σ   (2)

which yields:
xα = μ + zα × σ   (3)

zα is the left-tail α percentile of a standard normal distribution. Consequently, we can re-write (1) as:
VaRα = (μ + zα × σ) × P   (4)

Example 1 – Analytical VaR of a single asset

Suppose we want to calculate the Analytical VaR at a 95% confidence level and over a holding period of 1 day for an asset in which we have invested $1 million. We have estimated3 μ (mean) and σ (standard deviation) to be 0.3% and 3% respectively. The Analytical VaR of that asset would be:

VaR0.95 = (μ + zα4 × σ) × P = (0.3% + (−1.6449) × 3%) × $1,000,000 = −$46,347
This means that there is a 5% chance that this asset may lose at least $46,347 at the end of the next trading day under normal market conditions.

Example 2 – Conversion of the confidence level

Assume now that we are interested in a 99% Analytical VaR of the same asset over the same one-day holding period. The corresponding VaR would simply be:

VaR0.99 = (0.3% + (−2.3263) × 3%) × $1,000,000 = −$66,789
There is a 1% chance that this asset may experience a loss of at least $66,789 at the end of the next trading day. As you can see, the higher the confidence level, the higher the VaR as we travel downwards along the tail of the distribution (further left on the x-axis).
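Examples 1 and 2 can be reproduced in a few lines of Python. This is a sketch, not part of the original article; `analytical_var` is a hypothetical helper name, and it uses the exact percentiles rather than the rounded values −1.6449 and −2.3263 above, so the results differ from the article’s figures by a dollar or two:

```python
from statistics import NormalDist

def analytical_var(mu: float, sigma: float, value: float, alpha: float = 0.05) -> float:
    """One-period Analytical VaR per equation (4): VaR = (mu + z_alpha * sigma) * P.
    alpha is the left-tail probability: 0.05 for a 95% confidence level.
    The result is negative, i.e. a loss."""
    z_alpha = NormalDist().inv_cdf(alpha)  # left-tail percentile of the standard normal
    return (mu + z_alpha * sigma) * value

# Example 1: mu = 0.3%, sigma = 3%, P = $1 million, 95% confidence level
print(round(analytical_var(0.003, 0.03, 1_000_000)))        # -46346
# Example 2: same asset at a 99% confidence level
print(round(analytical_var(0.003, 0.03, 1_000_000, 0.01)))  # -66790
```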

Example 3 – Conversion of the holding period

If we want to calculate a one-month (21 trading days on average) VaR of that asset using the same inputs, we can simply apply the square-root-of-time rule5:

VaRα(T days) = VaRα(1 day) × √T   (5)

Applying this rule to our examples above yields the following one-month VaR estimates for the two confidence levels:

VaR0.95 (1 month) = −$46,347 × √21 ≈ −$212,389
VaR0.99 (1 month) = −$66,789 × √21 ≈ −$306,066
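The square-root-of-time rule (5) is a one-liner in Python; this sketch (with a hypothetical `scale_var` helper) scales the two one-day figures from Examples 1 and 2 to a 21-day horizon:

```python
import math

def scale_var(var_1d: float, horizon_days: int) -> float:
    """Scale a one-day VaR to a longer horizon using the square-root-of-time rule (5)."""
    return var_1d * math.sqrt(horizon_days)

# One-month (21 trading days) VaR from the one-day figures of Examples 1 and 2
print(round(scale_var(-46_347, 21)))  # -212389
print(round(scale_var(-66_789, 21)))  # -306066
```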

Example 4 – Analytical VaR of a portfolio of two assets

Let us assume now that we have a portfolio worth $100 million that is equally invested in two distinct assets. One of the main reasons to invest in two different assets would be to diversify the risk of the portfolio. Therefore, the main underlying question here is how one asset would behave if the other asset were to move against us. In other words, how will the correlation between these two assets affect the VaR of the portfolio? As we aggregate one level up the calculation of Analytical VaR, we replace in (4) the mean of the asset by the weighted mean of the portfolio, μP and the standard deviation (or volatility) of the asset by the volatility of the portfolio, σP. The volatility of a portfolio composed of two assets is given by:

σP = √(w1²σ1² + w2²σ2² + 2 × w1 × w2 × ρ1,2 × σ1 × σ2)   (6)

where

  • w1 is the weighting of the first asset
  • w2 is the weighting of the second asset
  • σ1 is the standard deviation or volatility of the first asset
  • σ2 is the standard deviation or volatility of the second asset
  • ρ1,2 is the correlation coefficient between the two assets

And (4) can be re-written as:

VaRα = (μP + zα × σP) × P   (7)

Let us assume that we want to calculate Analytical VaR at a 95% confidence level over a one-day horizon on a portfolio composed of two assets with the following assumptions:

  • P = $100 million
  • w1 = w2 = 50%6
  • μ1 = 0.3%
  • σ1 = 3%
  • μ2 = 0.5%
  • σ2 = 5%
  • ρ1,2 = 30%
Plugging these assumptions into (6) and (7) yields:

μP = 0.5 × 0.3% + 0.5 × 0.5% = 0.4%
σP = √(0.5² × 0.03² + 0.5² × 0.05² + 2 × 0.5 × 0.5 × 0.3 × 0.03 × 0.05) = √0.001075 ≈ 3.2787%
VaR0.95 = (0.4% + (−1.6449) × 3.2787%) × $100,000,000 ≈ −$4.99 million   (8)
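As a cross-check, the two-asset calculation can be sketched in Python (a hypothetical `portfolio_var_two_assets` helper, using the exact percentile rather than the rounded 1.6449):

```python
import math
from statistics import NormalDist

def portfolio_var_two_assets(value, w1, mu1, sigma1, w2, mu2, sigma2, rho, alpha=0.05):
    """Analytical VaR of a two-asset portfolio per equations (6) and (7)."""
    mu_p = w1 * mu1 + w2 * mu2                                  # weighted mean of the portfolio
    sigma_p = math.sqrt(w1**2 * sigma1**2 + w2**2 * sigma2**2
                        + 2 * w1 * w2 * rho * sigma1 * sigma2)  # equation (6)
    z_alpha = NormalDist().inv_cdf(alpha)
    return (mu_p + z_alpha * sigma_p) * value                   # equation (7)

# Example 4: P = $100 million, 50/50 weights, rho = 30%
var = portfolio_var_two_assets(100_000_000, 0.5, 0.003, 0.03, 0.5, 0.005, 0.05, 0.3)
print(round(var))  # about -4.99 million
```

Note that this is smaller in magnitude than the weighted sum of the two stand-alone VaRs (roughly $6.2 million): the 30% correlation diversifies part of the risk away.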

Example 5 – Analytical VaR of a portfolio composed of n assets

From the previous example, we can generalize these calculations to a portfolio composed of n assets. In order to keep the mathematical formulation handy, we use matrix notation and can re-write the volatility of the portfolio as:

σP = √(w’ × Σ × w)   (9)

where:
  • w is the vector of the weights of the n assets
  • w’ is the transpose vector of w
  • Σ is the covariance matrix of the n assets

Practically, we could design a spreadsheet in Excel (Exhibit 1) to calculate Analytical VaR on the portfolio in Example 4. The cells in grey are the input cells.

It is easy from there to expand the calculation to a portfolio of n assets. But be aware that we will soon reach the limits of Excel, as we will have to calculate n(n-1)/2 distinct pairwise covariance terms for the covariance matrix.
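Before hitting those limits, the matrix form (9) can also be sketched in plain Python, without Excel or any matrix library; `portfolio_var` is a hypothetical helper, and the figures simply reuse Example 4’s assumptions:

```python
import math
from statistics import NormalDist

def portfolio_var(value, weights, means, cov, alpha=0.05):
    """Analytical VaR of an n-asset portfolio per equation (9): sigma_P = sqrt(w' Sigma w)."""
    n = len(weights)
    mu_p = sum(w * m for w, m in zip(weights, means))
    var_p = sum(weights[i] * cov[i][j] * weights[j]
                for i in range(n) for j in range(n))  # w' Sigma w
    return (mu_p + NormalDist().inv_cdf(alpha) * math.sqrt(var_p)) * value

# Covariance matrix of Example 4: sigma_i^2 on the diagonal, rho * sigma_1 * sigma_2 off it
cov = [[0.03**2,           0.3 * 0.03 * 0.05],
       [0.3 * 0.03 * 0.05, 0.05**2]]
print(round(portfolio_var(100_000_000, [0.5, 0.5], [0.003, 0.005], cov)))  # about -4.99 million
```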

Part 3: Risk Mapping

In order to cope with a covariance matrix that grows each time the portfolio is diversified further, we can map each security of the portfolio to common fundamental risk factors and base our calculations of Analytical VaR on these risk factors. This process, called risk mapping, aims at reducing the size of the covariance matrix and speeding up the computational time of transposing and multiplying matrices. We generally consider four main risk factors: Spot FX, Equity, Zero-Coupon Bonds and Futures/Forwards. The complexity of this process goes beyond the scope of this overview of Analytical VaR and will be treated separately in a future article.

[Exhibit 1: Excel spreadsheet to calculate Analytical VaR for a portfolio of two assets]

Part 4: Volatility Models

We can see from the various expressions of Analytical VaR above that its main driver is the expected volatility (of the asset or the portfolio), since volatility is multiplied by a factor greater than 1 in absolute value (1.6449 for a 95% VaR, for instance), whereas the expected mean is simply added to it. Hence, if we have used historical data to derive the expected volatility, we should consider that today’s volatility may be positively correlated with yesterday’s volatility, in which case we may try to estimate the conditional volatility of the asset or the portfolio. The two most common volatility models used to compute VaR are the Exponentially Weighted Moving Average (EWMA) and the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. Again, in order to be exhaustive on this very important part of computing VaR, we will discuss these models in a future article.
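To give a taste of what is to come, here is a minimal EWMA sketch. It is an illustration only, not the article’s method: the recursion and its seeding are one standard choice, λ = 0.94 is the decay factor RiskMetrics popularized for daily data, and the return series is made up:

```python
import math

def ewma_volatility(returns, lam=0.94):
    """EWMA conditional volatility: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2,
    seeded here with the first squared return."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return math.sqrt(var)

daily_returns = [0.010, -0.020, 0.015]  # illustrative daily returns
print(round(ewma_volatility(daily_returns), 6))  # 0.011154
```

The 1 − λ weight on the most recent squared return means yesterday’s shock decays geometrically rather than dropping out abruptly, which captures the positive autocorrelation of volatility described above.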

Part 5: Advantages and Disadvantages of Analytical VaR

Analytical VaR is the simplest methodology to compute VaR and is rather easy to implement for a fund. The input data are rather limited, and since there are no simulations involved, the computation time is minimal. Its simplicity, however, is also its main drawback. First, Analytical VaR assumes not only that the historical returns follow a normal distribution, but also that the changes in price of the assets included in the portfolio follow a normal distribution, and this assumption very rarely survives the test of reality. Second, Analytical VaR does not cope well with securities that have a non-linear payoff, such as options or mortgage-backed securities. Finally, if our historical series exhibits heavy tails, then computing Analytical VaR using a normal distribution will underestimate VaR at high confidence levels and overestimate it at low confidence levels.

Conclusion

As we have demonstrated, Analytical VaR is easy to implement as long as we follow these steps. First, we need to collect historical data on each security in the portfolio (we advise using at least one year of historical data, unless one security has experienced high volatility, which would suggest a shorter period of time). Second, if the portfolio has a large number of underlying positions, we need to map them against a more manageable set of risk factors. Third, we need to calculate the historical parameters (mean, standard deviation, etc.) and estimate the expected prices, volatilities and correlations. Finally, we apply (7) to find the Analytical VaR estimate of the portfolio.

As always when building a model, it is important to make sure that the model has been reviewed, fully tested and approved; that a User Guide (including any code) has been documented and will be kept up to date; that training has been designed and delivered both to the members of the risk management team and to the recipients of the risk management function’s outputs; and, finally, that a capable person has been given oversight of the model, its current use and its regular refinement.



1CDO stands for Collateralized Debt Obligation. These instruments repackage a portfolio of average- or poor-quality debt into high-quality debt (generally rated AAA) by splitting a portfolio of corporate bonds or bank loans into four classes of securities, called tranches.
2Pronounced V’ah’R.
3Note that these parameters have to be estimated. They are not the historical parameters derived from the series.
4Note that zα is to be read in the statistical table of a standard normal distribution.
5This rule stems from the fact that the sum of n consecutive one-day log returns is the n-day log return and the standard deviation of n-day returns is √n × standard deviation of one-day returns.
6These weights correspond to the weights of the two assets at the end of the holding period. Because of market movements, there is little likelihood that they will be the same as the weights at the beginning of the holding period.


Copyright © 2013 JPMorgan Chase & Co. All rights reserved.