- the meaning of VaR
- how VaR is calculated in practice
- some of the difficulties associated with VaR for portfolios containing derivatives
Introduction
Historically, financial institutions often had no idea what might result from some of their more exotic transactions, frequently involving derivatives.
Out of this grew the concept of Value at Risk as a measure of the possible downside from an investment or portfolio.
Definition of Value at Risk
Value at Risk is an estimate, with a given degree of confidence, of how much one can lose from one’s portfolio over a given time horizon.
The degree of confidence is typically set at 95%, 97.5%, 99% etc. The time horizon is supposed to be the timescale associated with the orderly liquidation of the portfolio, meaning the sale of assets at a sufficiently low rate for the sale to have little effect on the market. Thus the VaR is an estimate of a loss that can be realized, not just a ‘paper’ loss.
VaR is calculated assuming normal market circumstances, meaning that extreme market conditions such as crashes are not considered, or are examined separately. Thus, effectively, VaR measures what can be expected to happen during the day-to-day operation of an institution.
The calculation of VaR requires at least the following data:
- the current prices of all assets in the portfolio
- their volatilities
- the correlations between them
Marking to market
- If the assets are traded we can take the prices from the market
Marking to model
- For OTC contracts we must use some ‘approved’ model for the prices, such as a Black-Scholes-type model
- Usually, one assumes that the movements of the components of the portfolio are random and drawn from normal distributions.
VaR for a single asset
We hold a quantity \(\Delta\) of a stock with price S and volatility \(\sigma\). We want to know with 99% certainty what is the maximum we can lose over the next week.
Assumptions
- the distribution is normal
- The time horizon is short, so the mean return is negligible and taken to be zero.
The standard deviation of the stock price over this time horizon is
\[ \sigma S \left(\frac{1}{52}\right)^{1/2} \]
since the timestep is 1/52 of a year. With 99% confidence the loss will not exceed 2.33 of these standard deviations (2.33 = \(-\alpha(0.01)\), see below), so the one-week VaR is \(2.33\,\sigma\Delta S(1/52)^{1/2}\).
More generally
\[ VaR = -\sigma\Delta S(\delta t)^{1/2}\alpha(1 - c) \]
where \(\alpha()\) is the inverse cumulative distribution function for the standardized normal distribution.
The assumption of zero mean is valid for short time horizon: The standard deviation of the return scales with the square root of time but the mean scales with time itself.
If we include a drift at rate \(\mu\), the expression becomes
\[ VaR = -\Delta S(\mu \delta t + \sigma \delta t^{1/2}\alpha(1 - c)) \]
so that a positive drift reduces the potential loss.
Note that the drift here is the real drift, not the risk-neutral one.
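As a concrete sketch (all numbers hypothetical): the one-week 99% VaR of a single stock position, with and without the drift correction.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical numbers for illustration: 1000 shares of a $100 stock,
# 20% annual volatility, one-week horizon, 99% confidence.
quantity = 1000          # Delta: number of shares held
S = 100.0                # current stock price
sigma = 0.20             # annual volatility
dt = 1.0 / 52.0          # one-week time horizon (in years)
c = 0.99                 # confidence level

# alpha(1 - c): inverse cumulative normal at the 1% tail (a negative number)
alpha = norm.ppf(1.0 - c)

# VaR ignoring the drift (valid for short horizons)
var_no_drift = -sigma * quantity * S * np.sqrt(dt) * alpha
print(f"1-week 99% VaR (no drift): {var_no_drift:,.0f}")

# Including a real (not risk-neutral) drift mu reduces the potential loss
mu = 0.10
var_with_drift = -quantity * S * (mu * dt + sigma * np.sqrt(dt) * alpha)
print(f"1-week 99% VaR (with drift): {var_with_drift:,.0f}")
```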
VaR for a portfolio
The VaR for a portfolio consisting of M assets, with a holding of \(\Delta_i\) of the ith asset, is
\[ -\alpha(1 - c)\delta t^{1/2}\sqrt{\sum_{j = 1}^M\sum_{i = 1}^M{\Delta_i \Delta_j \sigma_i \sigma_j \rho_{ij} S_i S_j}} \]
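A minimal numpy sketch of this double sum, with a hypothetical three-asset portfolio (all holdings, volatilities and correlations invented for illustration):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 3-asset portfolio for illustration.
holdings = np.array([100.0, 200.0, -50.0])   # Delta_i: quantity of each asset
S = np.array([50.0, 30.0, 80.0])             # current prices S_i
sigma = np.array([0.25, 0.20, 0.30])         # annual volatilities sigma_i
rho = np.array([[1.0, 0.5, 0.2],             # correlation matrix rho_ij
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
dt = 1.0 / 52.0
c = 0.99

# Dollar exposure-volatility of each position: Delta_i * sigma_i * S_i
exposure = holdings * sigma * S

# The double sum is a quadratic form in the correlation matrix
portfolio_std = np.sqrt(exposure @ rho @ exposure)

VaR = -norm.ppf(1.0 - c) * np.sqrt(dt) * portfolio_std
print(f"1-week 99% portfolio VaR: {VaR:,.0f}")
```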
Several criticisms can be made of this definition of VaR:
- returns are not normal
- volatilities and correlations are notoriously difficult to measure
- it does not allow for derivatives in the portfolio
VaR for derivatives
The key point about estimating VaR for a portfolio containing derivatives is that, even if the change in the underlying is normal, the essential non-linearity in derivatives means that the change in the derivative can be far from normal. Nevertheless, if we are concerned with very small movements in the underlying, we may be able to approximate the sensitivity of the portfolio to changes in the underlying by the option's delta. For larger movements we may need to take a higher-order approximation.
The delta approximation
\[ \sigma S \delta t^{1/2}\Delta \]
- \(\sigma S \delta t^{1/2}\): standard deviation of the distribution of the underlying
- \(\Delta\): the delta of the whole position, the sensitivity of all of the relevant options to the particular underlying
It is but a small, and obvious, step to the following estimate for the VaR of a portfolio containing options:
\[ -\alpha (1 - c)\delta t^{1/2}\sqrt{\sum_{j = 1}^M\sum_{i = 1}^M{\Delta_i \Delta_j \sigma_i \sigma_j \rho_{ij} S_i S_j}} \]
Here \(\Delta_i\) is the rate of change of the portfolio with respect to the ith asset.
Which volatility do I use?
For a single underlying, the delta approximation to VaR depends on the standard deviation
\[ \sigma S \delta t^{1/2}\Delta \]
In principle one might use the implied volatility, since that is what is embedded in option prices and deltas. However, VaR concerns the real movement of the stock over the horizon, so the actual volatility should be used.
The delta-gamma approximation
\[ \delta V = \frac{\partial V}{\partial S}\delta S + \frac{1}{2}\frac{\partial^2 V}{\partial S^2}(\delta S)^2 + \frac{\partial V}{\partial t}\delta t + \dots \]
Assuming
\[ \delta S = \mu S \delta t + \sigma S \delta t^{1/2} \phi \]
where \(\phi\) is drawn from a standardized normal distribution, we can write
\[ \delta V = \frac{\partial V}{\partial S} \sigma S \delta t^{1/2}\phi + \delta t \left(\frac{\partial V}{\partial S} \mu S + \frac{1}{2}\frac{\partial^2 V}{\partial S^2} \sigma^2 S^2 \phi^2 + \frac{\partial V}{\partial t}\right) + \dots \]
\[ \delta V = \Delta \sigma S \delta t^{1/2} \phi + \delta t (\Delta \mu S + \frac{1}{2}\Gamma\sigma^2 S^2 \phi^2 + \Theta) + \dots \]
To leading order, the randomness in the option value is simply proportional to that in the underlying. To the next order there is a deterministic shift in \(\delta V\) due to the deterministic drift of S and the theta of the option. More importantly, however, the effect of the gamma is to introduce a term that is non-linear in the random component of \(\delta S\).
Since \(\delta V\) is quadratic in \(\phi\), it is bounded on one side (ignoring the smaller deterministic terms):
\[ \delta V \geq -\frac{\Delta^2}{2 \Gamma} \quad \text{if } \Gamma > 0, \qquad \delta V \leq -\frac{\Delta^2}{2 \Gamma} \quad \text{if } \Gamma < 0 \]
Whether this bound matters depends on whether the critical value of \(\phi\) lies in the part of the tail in which we are interested.
If we cannot use an approximation we may have to run simulations using valuation formulae.
Positive gamma is good for a portfolio, negative gamma is bad:
- positive gamma: limited downside
- negative gamma: limited upside
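A Monte Carlo sketch of the delta-gamma approximation, with hypothetical position Greeks; it also illustrates the loss bound for positive gamma.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical option position for illustration.
S, sigma, mu = 100.0, 0.20, 0.05
delta, gamma, theta = 0.6, 0.05, -4.0   # position Greeks (theta per year)
dt = 1.0 / 52.0
c = 0.99

# Simulate the standardized normal driver phi and apply the
# delta-gamma(-theta) expansion of delta V
phi = rng.standard_normal(1_000_000)
dV = (delta * sigma * S * np.sqrt(dt) * phi
      + dt * (delta * mu * S + 0.5 * gamma * sigma**2 * S**2 * phi**2 + theta))

# VaR is minus the (1 - c) quantile of the change in value
VaR = -np.quantile(dV, 1.0 - c)
print(f"1-week 99% delta-gamma VaR: {VaR:.2f}")

# With positive gamma the random part of the loss is bounded below
# by -delta^2 / (2 * gamma)
print(f"Loss bound -delta^2/(2*gamma): {-delta**2 / (2 * gamma):.2f}")
```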
Use of valuation models
The obvious way around the problems associated with non-linear instruments is to use a simulation for the random behavior of the underlying and then use valuation formulae or algorithms to deduce the distribution of the changes in the whole portfolio. This is the ultimate solution to the problem but has the disadvantage that it can be very slow.
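As a sketch of this approach, assuming a hypothetical short call position valued with the Black-Scholes formula as the 'approved' model, and simulating the underlying with its real drift:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(1)

# Hypothetical position: short 100 calls.
S0, K, T, r = 100.0, 105.0, 0.5, 0.03
sigma_implied = 0.22    # used in the valuation formula
sigma_actual = 0.25     # used to simulate the real-world move
mu = 0.08               # real drift, not risk-neutral
dt = 1.0 / 52.0
c, n_paths = 0.99, 100_000

V0 = -100 * bs_call(S0, K, T, r, sigma_implied)

# Simulate the underlying over the horizon with its REAL drift...
phi = rng.standard_normal(n_paths)
S1 = S0 * np.exp((mu - 0.5 * sigma_actual**2) * dt
                 + sigma_actual * np.sqrt(dt) * phi)

# ...then revalue the position with the approved pricing model
V1 = -100 * bs_call(S1, K, T - dt, r, sigma_implied)

VaR = -np.quantile(V1 - V0, 1.0 - c)
print(f"1-week 99% VaR by full revaluation: {VaR:.0f}")
```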
Fixed-income portfolios
A fixed-income asset can be thought of as a derivative of the yield, so the same delta and gamma ideas apply, with duration and convexity playing the corresponding roles.
Simulations
The simulations must use real-world returns, not risk-neutral ones.
Monte Carlo
the generation of normally distributed random numbers
Bootstrapping
using actual asset price movements taken from historical data
Two possible ways of generating future scenarios:
- a one-step procedure, using a model for the distribution of returns over the entire time horizon
- a multi-step procedure, using data or a model for returns over short periods and compounding them up to the required horizon
By this method we generate a distribution of possible future scenarios based on historical data.
advantages
- naturally incorporates any correlation between assets, and any non-normality in asset price changes
disadvantages
- does not capture any autocorrelation in the data, since the historical returns are drawn independently
- requires a lot of historical data
- the historical data may come from completely different economic circumstances
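A sketch of a multi-step bootstrap, with fabricated 'historical' returns standing in for real data; drawing whole days at random preserves cross-asset correlation but, as noted above, destroys any autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(2)

# In practice `returns` would be a (n_days, n_assets) array of historical
# daily returns; here we fabricate some purely for illustration.
n_days, n_assets = 1000, 3
returns = rng.multivariate_normal(
    mean=[0.0003, 0.0002, 0.0004],
    cov=[[1e-4, 4e-5, 2e-5],
         [4e-5, 9e-5, 3e-5],
         [2e-5, 3e-5, 2e-4]],
    size=n_days)

position_values = np.array([1e6, 5e5, 2e5])  # current dollar value per asset
horizon, n_scenarios, c = 5, 10_000, 0.99    # 5-day horizon

# Multi-step bootstrap: for each scenario, draw `horizon` whole days at
# random (keeping each day's cross-asset returns together, which preserves
# correlation) and compound them.
days = rng.integers(0, n_days, size=(n_scenarios, horizon))
scenario_growth = np.prod(1.0 + returns[days], axis=1)  # (n_scenarios, n_assets)
pnl = (scenario_growth - 1.0) @ position_values

VaR = -np.quantile(pnl, 1.0 - c)
print(f"5-day 99% bootstrapped VaR: {VaR:,.0f}")
```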
Use of VaR as a performance measure
Sharpe ratio:
\[ \frac{\mu - r}{\sigma} \]
A VaR-based analogue measures profit per unit of risk as measured by VaR:
\[ \frac{\text{daily P\&L}}{\text{daily VaR}} \]
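A toy sketch of both measures on a fabricated series of daily returns (the 3% risk-free rate and all return statistics are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustration: a year of fabricated daily returns for a desk.
daily_ret = rng.normal(loc=0.0008, scale=0.01, size=252)
r = 0.03 / 252                     # daily risk-free rate (assumed 3% p.a.)

# Sharpe ratio (annualized): (mu - r) / sigma
sharpe = (daily_ret.mean() - r) / daily_ret.std() * np.sqrt(252)

# VaR-based alternative: average daily P&L over the daily 95% VaR,
# with VaR estimated empirically from the same history
daily_var = -np.quantile(daily_ret, 0.05)
ratio = daily_ret.mean() / daily_var

print(f"Sharpe: {sharpe:.2f}, daily P&L / daily VaR: {ratio:.3f}")
```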
Introductory extreme value theory
Extreme value theory
- modern techniques for estimating tail risk
- for representing more accurately the outer limits of returns distributions since this is where the most important risk is
Distribution of maxima/minima
The distribution formed by collecting only the maximum value from each of many samples:
- \(\xi = 0\): Gumbel distribution
- \(\xi < 0\): Weibull
- \(\xi > 0\): Fréchet
If \(X_i\) are independent, identically distributed random variables and
\[ x = \max(X_1, X_2, \dots, X_n), \]
then (after suitable normalization) the distribution of x converges to
\[ \exp\left(-\left(1 + \frac{\xi(x - \mu)}{\sigma}\right)^{-1/\xi}\right) \]
When \(\xi = 0\) this is a Gumbel distribution, when \(\xi < 0\) it is a Weibull and when \(\xi > 0\) a Fréchet. The Fréchet case is the one of interest in finance because it is associated with fat tails.
- Gumbel distribution (the type-I generalized extreme value distribution, used to model the maximum or minimum of a number of samples): https://en.wikipedia.org/wiki/Gumbel_distribution
- Weibull distribution: https://en.wikipedia.org/wiki/Weibull_distribution
- Fréchet distribution (also known as the inverse Weibull distribution, a special case of the generalized extreme value distribution): https://en.wikipedia.org/wiki/Fréchet_distribution
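A sketch of fitting the generalized extreme value distribution to block maxima with scipy. Note that scipy's `genextreme` uses the shape convention \(c = -\xi\), so fat tails (Fréchet, \(\xi > 0\)) correspond to a fitted \(c < 0\); the data here are fabricated.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Illustration only: collect the worst daily loss in each of 100 quarters
# of 63 trading days (fabricated from a fat-tailed Student-t distribution
# instead of real history).
daily_losses = rng.standard_t(df=4, size=(100, 63))
block_maxima = daily_losses.max(axis=1)

# Fit the generalized extreme value distribution to the block maxima
# by maximum likelihood.
c_hat, loc, scale = genextreme.fit(block_maxima)
xi = -c_hat   # convert scipy's shape back to xi
print(f"xi = {xi:.3f}  ({'Frechet' if xi > 0 else 'Weibull/Gumbel'} regime)")

# e.g. the once-per-25-quarters worst daily loss (return level)
print("25-quarter return level:", genextreme.ppf(1 - 1/25, c_hat, loc, scale))
```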
Peaks over threshold
The distribution of the amounts by which losses exceed a given threshold converges to a Generalized Pareto Distribution. Its parameters can be fitted to historical data by maximum likelihood, and the fitted distribution then justifies extrapolating beyond the observed data.
Consider the distribution of the amount y by which the loss exceeds a threshold u, given that the threshold has been exceeded:
\[ F_u(y) = P(X - u \leq y | X > u) \]
This can be approximated by a Generalized Pareto Distribution:
\[ 1 - \left(1 + \frac{\xi y}{\beta}\right)^{-1/\xi} \]
For heavy tails we have \(\xi > 0\) in which case not all moments exist:
\[ E[X^k] = \infty \quad \text{for } k \geq 1/\xi \]
The parameters in the models are fitted by maximum likelihood estimation, using historical data for example, and from that we can extrapolate to the future.
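A sketch of the peaks-over-threshold fit with scipy's `genpareto` (whose shape parameter is \(\xi\) directly), again on fabricated losses:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)

# Illustration only: fabricated fat-tailed daily losses.
losses = rng.standard_t(df=3, size=10_000)

# Peaks over threshold: keep the excesses y = X - u above a high threshold u.
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u

# Fit the Generalized Pareto Distribution to the excesses by maximum
# likelihood (location fixed at 0 since we already subtracted u).
xi, _, beta = genpareto.fit(excesses, floc=0)
print(f"xi = {xi:.3f}, beta = {beta:.3f}")

# For xi > 0 the k-th moment diverges for k >= 1/xi
if xi > 0:
    print(f"moments of order >= {1/xi:.1f} do not exist")
```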
Coherence
Artzner, Delbaen, Eber & Heath (1997) proposed the notion of a coherent risk measure.
A coherent risk measure \(\rho\) has the following properties:
- sub-additivity: \(\rho(X + Y) \leq \rho(X) + \rho(Y)\)
- monotonicity: if X ≤ Y for each scenario then \(\rho(X) \geq \rho(Y)\)
- positive homogeneity: for all \(\lambda > 0\), \(\rho(\lambda X) = \lambda\rho(X)\)
- translation invariance: for all constant c, \(\rho(X + c) = \rho(X) - c\)
Standard VaR is not a coherent measure: it fails sub-additivity, so diversification can appear to increase risk. Measures such as Expected Shortfall do satisfy all four criteria.
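A minimal numerical counterexample (hypothetical numbers): two independent loans each have zero 95% VaR on their own, but a strictly positive VaR together, violating sub-additivity.

```python
# Classic counterexample: two independent loans, each worth 100,
# each defaulting (total loss) with probability 4%.
p_default, loss, c = 0.04, 100.0, 0.95

def var(outcomes_probs, c):
    """c-quantile VaR of a discrete loss distribution."""
    outcomes = sorted(outcomes_probs)
    cum = 0.0
    for x, p in outcomes:
        cum += p
        if cum >= c:
            return x
    return outcomes[-1][0]

# Single loan: P(loss = 0) = 96% >= 95%, so the 95% VaR is 0
single = [(0.0, 1 - p_default), (loss, p_default)]
print("VaR(X) = VaR(Y) =", var(single, c))

# Two loans: P(no default) = 0.96^2 = 92.16% < 95%, so the 95% VaR is 100
both = [(0.0, (1 - p_default)**2),
        (loss, 2 * p_default * (1 - p_default)),
        (2 * loss, p_default**2)]
print("VaR(X + Y) =", var(both, c))   # 100 > 0 + 0: sub-additivity fails
```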
Summary
Risk management is not so much complicated as messy and time-consuming.