The partial Monte-Carlo method is a Monte-Carlo simulation that is performed
by generating underlying prices given the statistical model and then valuing
them using the simple delta-gamma approximation.
We denote by $\Delta S$ the vector of changes in the risk factors, by $\Delta V$ the change in portfolio value resulting from $\Delta S$, by $L$ the loss $-\Delta V$, by $\alpha$ the confidence level, and by $l$ the corresponding loss threshold. We also let
$$\Delta V \approx \delta^\top \Delta S + \frac{1}{2}\,\Delta S^\top \Gamma\, \Delta S, \qquad \Delta S \sim N(0, \Sigma). \tag{1.1}$$
Equation (1.1) defines the class of Delta-Gamma normal methods. The detailed procedures to implement the partial Monte-Carlo method are as follows:
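These steps amount to decomposing the covariance matrix, sampling standard normal drivers, mapping them to correlated risk-factor changes, valuing each scenario with the Delta-Gamma approximation, and reading off the empirical quantile of the loss. A minimal Python sketch follows; the function name `partial_mc_var` and the toy inputs `delta`, `gamma`, `sigma` are illustrative, not from the text.

```python
import numpy as np

def partial_mc_var(delta, gamma, sigma, alpha=0.01, n=100_000, seed=0):
    """VaR by partial Monte-Carlo under the Delta-Gamma approximation."""
    rng = np.random.default_rng(seed)
    m = len(delta)
    chol = np.linalg.cholesky(sigma)       # step 1: decompose the covariance
    z = rng.standard_normal((n, m))        # step 2: sample normal drivers
    ds = z @ chol.T                        # step 3: correlated factor changes
    # step 4: Delta-Gamma loss L = -(delta' dS + 0.5 dS' Gamma dS)
    loss = -(ds @ delta) - 0.5 * np.einsum("ij,jk,ik->i", ds, gamma, ds)
    # step 5: VaR is the empirical (1 - alpha) quantile of the loss
    return np.quantile(loss, 1.0 - alpha)

# toy two-factor portfolio (illustrative numbers only)
delta = np.array([1.0, 0.5])
gamma = np.array([[-0.2, 0.0], [0.0, -0.1]])
sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
var99 = partial_mc_var(delta, gamma, sigma)
```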
The partial Monte-Carlo method is flexible and easy to implement. It provides accurate estimates of the VaR when the loss function is approximately quadratic. However, one drawback is that, for a large number of risk factors, it requires a large number of replications and long computation times.
According to Boyle et al. (1998), the convergence rate of the Monte-Carlo estimate is $O(1/\sqrt{N})$, where $N$ is the number of replications. Different variance reduction techniques have been developed to increase the precision and speed up the process. In the next section, we give a brief overview of the different types of variance reduction techniques; see Boyle et al. (1998).
We assume $L = f(Z)$, where $Z = (Z_1, \dots, Z_m)^\top$ are independent samples from the standard normal distribution. In our case, the function $f$ is defined by the Delta-Gamma approximation of the loss. Based on $N$ replications, an unbiased estimator of $a = \mathrm{E}[f(Z)]$ is given by
$$\hat a = \frac{1}{N} \sum_{i=1}^{N} f(Z_i).$$
Since $-Z_i$ has the same distribution as $Z_i$, the estimator $\frac{1}{N}\sum_{i=1}^{N} f(-Z_i)$ is also an unbiased estimator of $a$. Therefore, the antithetic estimator
$$\hat a_{AV} = \frac{1}{N} \sum_{i=1}^{N} \frac{f(Z_i) + f(-Z_i)}{2}$$
is an unbiased estimator of $a$ as well.
The intuition behind the antithetic method is that the random inputs obtained from the collection of antithetic pairs $\{(Z_1, -Z_1), \dots, (Z_N, -Z_N)\}$ are more regularly distributed than a collection of $2N$ independent samples. In particular, the
sample mean over the antithetic pairs always equals the population mean of 0,
whereas the mean over finitely many independent samples is almost surely
different from 0.
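The antithetic construction can be sketched in Python; the quadratic loss `f` below is purely illustrative (its true mean is 0.5), and the antithetic average cancels its linear part exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

def f(z):
    """Illustrative quadratic loss; the linear part cancels in antithetic pairs."""
    return 2.0 * z + 0.5 * z**2

z = rng.standard_normal(n)
plain = f(z).mean()                      # plain estimate from n samples
anti = (0.5 * (f(z) + f(-z))).mean()     # antithetic estimate of E[f(Z)] = 0.5

# the 2n antithetic inputs have sample mean exactly zero
paired_mean = np.concatenate([z, -z]).mean()
```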
The basic idea of control variates is to replace the evaluation of an unknown expectation with the evaluation of the difference between the unknown quantity and another expectation whose value is known. The standard Monte-Carlo estimate of $a = \mathrm{E}[f(Z)]$ is $\hat a = \frac{1}{N}\sum_{i=1}^{N} f(Z_i)$. Suppose we know $b = \mathrm{E}[g(Z)]$ for some related function $g$. The method of control variates uses the known error $\hat b - b$, where $\hat b = \frac{1}{N}\sum_{i=1}^{N} g(Z_i)$, to correct the estimate:
$$\hat a_{CV} = \hat a - \beta\,(\hat b - b).$$
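A minimal sketch of this correction, using $f(Z) = e^Z$ as the illustrative target (true mean $e^{1/2}$) and $g(Z) = Z$ with known mean $b = 0$ as the control; the coefficient $\beta$ is estimated from the sample.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
z = rng.standard_normal(n)

f = np.exp(z)        # illustrative target; true mean is exp(1/2)
g = z                # control variate with known mean b = 0
b = 0.0

# variance-minimizing coefficient beta = Cov(f, g) / Var(g), from the sample
beta = np.cov(f, g)[0, 1] / np.var(g, ddof=1)
plain_estimate = f.mean()
cv_estimate = f.mean() - beta * (g.mean() - b)
```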
Let $Z_1, \dots, Z_N$ denote the independent standard normal random vectors used to drive a simulation. The sample moments will not exactly match those of the standard normal. The idea of moment matching is to transform the $Z_i$ to match a finite number of the moments of the underlying population. For example, the first and second moments of the normal random numbers can be matched by defining
$$\tilde Z_i = \frac{Z_i - \bar Z}{s_Z}, \qquad i = 1, \dots, N,$$
where $\bar Z$ is the sample mean and $s_Z$ the sample standard deviation of $Z_1, \dots, Z_N$.
The moment matching method can be extended to match covariance and higher moments as well.
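The first- and second-moment transformation can be sketched as follows; after rescaling, the sample mean and standard deviation match the population values exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.standard_normal(10_000)

# rescale so the sample mean is exactly 0 and the sample std exactly 1,
# matching the first two moments of the standard normal population
z_mm = (z - z.mean()) / z.std()

matched_mean = z_mm.mean()
matched_std = z_mm.std()
```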
Like many variance reduction techniques, stratified sampling seeks to make the inputs to the simulation more regular than purely random inputs. In stratified sampling, rather than drawing randomly and independently from a given distribution, the method ensures that fixed fractions of the samples fall within specified ranges. For example, suppose we want to generate $N$ $d$-dimensional normal random vectors as simulation input. The empirical distribution of an independent sample $Z_1, \dots, Z_N$ will look only roughly like the true normal density; the rare events, which are important for calculating the VaR, will inevitably be underrepresented. Stratified sampling can be used to ensure that exactly one observation $Z_{ij}$ lies between the $(i-1)/N$ and $i/N$ quantiles ($i = 1, \dots, N$) of the $j$-th marginal distribution for each of the $d$ components. One way to implement this is to generate $N \cdot d$ independent uniform random numbers $U_{ij}$ on $[0,1]$ ($i = 1, \dots, N$; $j = 1, \dots, d$) and set
$$Z_{ij} = \Phi^{-1}\!\left(\frac{i - 1 + U_{ij}}{N}\right),$$
where $\Phi^{-1}$ denotes the inverse of the standard normal distribution function.
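A one-dimensional sketch of this construction, using the standard library `NormalDist().inv_cdf` as $\Phi^{-1}$: each stratum of probability $1/n$ receives exactly one observation.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)
n = 1_000
inv_cdf = NormalDist().inv_cdf              # Phi^{-1}, from the stdlib

# one uniform draw per stratum: the i-th sample is forced into the
# ((i-1)/n, i/n) quantile range of the standard normal
u = rng.uniform(size=n)
z_strat = np.array([inv_cdf((i + u_i) / n) for i, u_i in enumerate(u)])
```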
The Latin Hypercube Sampling method was first introduced by McKay et al. (1979). In the Latin Hypercube Sampling method, the range of probable values for each component is divided into $n$ segments of equal probability. Thus, the $d$-dimensional space, consisting of $d$ parameters, is partitioned into $n^d$ cells, each having equal probability. For example, for the case of dimension $d = 2$ and $n = 10$ segments, the parameter space is divided into 100 cells. The next step is to choose 10 cells from the 100 cells. First, uniform random numbers are generated to calculate the cell number. The cell number indicates the segment number the sample belongs to with respect to each of the parameters. For example, the cell number (1,8) indicates that the sample lies in segment 1 with respect to the first parameter and in segment 8 with respect to the second parameter.
At each successive step, a random sample is generated, and is accepted only if
it does not agree with any previous sample on any of the segment numbers.
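A common implementation achieves the same end without rejection: a random permutation per dimension assigns each sample a distinct segment number, so no two samples ever agree on a segment. The permutation-based sketch below is this standard variant, not the acceptance scheme described above.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(5)
n, d = 10, 2                 # n segments per parameter, d parameters
inv_cdf = NormalDist().inv_cdf

# one random permutation per dimension: each sample gets a distinct segment,
# so no two samples share a segment number in any dimension
perms = np.column_stack([rng.permutation(n) for _ in range(d)])
u = rng.uniform(size=(n, d))                 # position within each segment
z_lhs = np.vectorize(inv_cdf)((perms + u) / n)
```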
The technique builds on the observation that an expectation under one probability measure can be expressed as an expectation under another through the use of a likelihood ratio. The intuition behind the method is to generate more samples from the region that is more important to the practical problem at hand. In the next section, we give a detailed description of calculating the VaR by the partial Monte-Carlo method with importance sampling.
In the basic partial Monte-Carlo method, the problem of sampling changes in market risk factors is transformed into a problem of sampling the vector $Z$ of underlying standard normal random variables. In importance sampling, we change the distribution of $Z$ from $N(0, I)$ to $N(\mu, \Sigma)$.
The key steps proposed by Glasserman et al. (2000) are to calculate
$$P(L > l) = \mathrm{E}_{\mu,\Sigma}\big[\,r(Z)\,\mathbf{1}\{L > l\}\,\big],$$
where $r(Z)$ is the likelihood ratio of the $N(0, I)$ density to the $N(\mu, \Sigma)$ density evaluated at $Z$.
The next task is to choose $\mu$ and $\Sigma$ so that the Monte-Carlo estimator will have minimum variance. The key to reducing the variance is making the likelihood ratio small when $L > l$. Equivalently, $\mu$ and $\Sigma$ should be chosen so as to make the event $\{L > l\}$ more likely under $N(\mu, \Sigma)$ than under $N(0, I)$. The steps of the algorithm are as follows:
We follow the decomposition steps described in Section 1.2 and find the cumulant generating function of $L$, given by
$$\psi(\theta) = \log \mathrm{E}\big[e^{\theta L}\big].$$
If we take the first derivative of $\psi(\theta)$ with respect to $\theta$ and solve
$$\psi'(\theta) = l,$$
we will get the twisting parameter $\theta$ that centers the importance-sampling distribution at the loss threshold.
After we get $\theta$, we can follow the same steps as in the basic partial Monte-Carlo simulation to calculate the VaR. The only difference is that the fraction of scenarios in which losses exceed $l$ is calculated by
$$\hat P(L > l) = \frac{1}{N} \sum_{i=1}^{N} e^{-\theta L_i + \psi(\theta)}\, \mathbf{1}\{L_i > l\}, \tag{1.59}$$
where $L_i$ denotes the loss in the $i$-th scenario.
An important feature of this method is that it can be easily added to an existing implementation of partial Monte-Carlo simulation. The importance sampling algorithm differs only in how it generates scenarios and in how it weights scenarios as in equation (1.59).
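A simplified one-dimensional sketch of the weighting idea follows. For brevity it shifts the sampling mean rather than applying the full exponential twisting of Glasserman et al. (2000), and the loss function and shift `mu` are illustrative; the point is that scenarios drawn from the shifted distribution are reweighted by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(6)
n, l = 100_000, 6.0

def loss(z):
    """Illustrative one-dimensional quadratic (delta-gamma style) loss."""
    return 2.0 * z + 0.3 * z**2

# plain Monte-Carlo: the loss region {loss(Z) > l} is hit only rarely
z0 = rng.standard_normal(n)
p_plain = np.mean(loss(z0) > l)

# importance sampling: draw from N(mu, 1) so the loss region is hit often,
# then reweight each scenario by the likelihood ratio N(0,1)/N(mu,1)
mu = 2.0
z1 = rng.standard_normal(n) + mu
w = np.exp(-mu * z1 + 0.5 * mu**2)       # likelihood ratio evaluated at z1
p_is = np.mean(w * (loss(z1) > l))
```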
The function
VaRestMC
uses different types of variance reduction to calculate the VaR by partial Monte-Carlo simulation. We employ the variance
reduction techniques of moment matching, Latin Hypercube Sampling and
importance sampling. The output is the estimated VaR. In order to test the
efficiency of different Monte-Carlo sampling methods, we collect data from the
MD*BASE
and construct a portfolio consisting of three German stocks (Bayer,
Deutsche Bank, Deutsche Telekom) and corresponding 156 options on these
underlying stocks with maturity ranging from 18 to 211 days on May 29, 1999.
The total portfolio value is 62,476 EUR. The covariance matrix for the stocks
is provided as well. Using the Black-Scholes model, we also construct the
aggregate delta and aggregate gamma as the input to the Quantlet. Choosing the importance sampling method, a 0.01 confidence level, a 1-day forecast horizon and 1,000 simulation runs, the result of the estimation is as follows.
Contents of VaRMC
[1,]   771.73
It tells us that we expect the loss to exceed 771.73 EUR, or 1.24% of the portfolio value, with less than 1% probability in one day. However, the key question of the empirical example is how much variance reduction is achieved by the different sampling methods. We run each of the four sampling methods 1,000 times and estimate the standard error of the estimated VaR for each sampling method. Table 1.1 summarizes the results.
As we see from Table 1.1, the standard error of importance sampling is 84.68% less than that of plain-vanilla sampling, which demonstrates that approximately 42 times more scenarios would have to be generated using the plain-vanilla method to achieve the same precision obtained by importance sampling based on the Delta-Gamma approximation. These results clearly indicate the great potential speed-up in estimating the VaR by using the importance sampling method. This is why we set importance sampling as the default sampling method in the function VaRestMC. However, the Latin Hypercube sampling method also achieved a 42.31% variance reduction.
One advantage of the Latin Hypercube sampling method is that the decomposition
process is not necessary. Especially when the number of risk factors $m$ is large, the decomposition ($O(m^3)$ operations) dominates the sampling ($O(m)$ per scenario) and the summation ($O(m)$ per scenario) in terms of computational time. In this case, Latin Hypercube sampling may offer better performance in terms of precision for a given computational time.