QMC methods can be considered as an alternative to Monte Carlo simulation. Instead of (pseudo) random numbers, Quasi Monte Carlo algorithms use the elements of low discrepancy sequences to simulate underlying values.
The discrepancy of a set of points $x_1, \ldots, x_n \in [0,1)^d$ measures how evenly these points are distributed in the unit cube. For a family $\mathcal{B}$ of Lebesgue-measurable subsets of $[0,1)^d$, the general measure of discrepancy is given by:
$$
D_n(\mathcal{B}; x_1, \ldots, x_n) \;=\; \sup_{B \in \mathcal{B}} \left| \frac{\#\{\, i \le n : x_i \in B \,\}}{n} - \lambda_d(B) \right|
\tag{16.11}
$$
where $\lambda_d$ denotes the $d$-dimensional Lebesgue measure.
The discrepancy of a set is thus the largest difference between the fraction of points falling into a subset and the measure of that subset. If we define $\mathcal{B}^{*}$ to be the family of subintervals of the form $\prod_{i=1}^{d} [0, u_i)$, then we get a special measure, the star-discrepancy:
$$
D_n^{*}(x_1, \ldots, x_n) \;=\; \sup_{B \in \mathcal{B}^{*}} \left| \frac{\#\{\, i \le n : x_i \in B \,\}}{n} - \lambda_d(B) \right|
\tag{16.12}
$$
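For very small point sets the supremum in (16.12) can be approximated by brute force. The following sketch (our own illustration in Python, all names assumed) evaluates the local discrepancy only over boxes $[0,u)$ whose upper corners are built from the point coordinates and from 1, which yields a lower bound on the star-discrepancy:

```python
import numpy as np
from itertools import product
from scipy.stats import qmc

def star_discrepancy_lb(x):
    """Lower bound on the star-discrepancy D*_n of points x in [0,1)^d.
    The supremum in (16.12) is taken only over boxes [0,u) whose upper
    corners u are built from the point coordinates and from 1.0."""
    d = x.shape[1]
    coords = [np.unique(np.append(x[:, j], 1.0)) for j in range(d)]
    disc = 0.0
    for u in product(*coords):                       # candidate upper corners
        u = np.asarray(u)
        frac = np.mean(np.all(x < u, axis=1))        # fraction of points in [0, u)
        disc = max(disc, abs(frac - np.prod(u)))     # |fraction - Lebesgue measure|
    return disc

# Example: 64 two-dimensional points; the search is O(n^d), so keep n and d small.
pseudo_pts = np.random.default_rng(0).random((64, 2))
halton_pts = qmc.Halton(d=2, scramble=False).random(64)
print("pseudo random:", star_discrepancy_lb(pseudo_pts))
print("Halton       :", star_discrepancy_lb(halton_pts))
```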
For the star-discrepancy measure and reasonable assumptions on the nature of the function that has to be integrated, an upper bound on the integration error is given by the following theorem (the Koksma-Hlawka inequality):
$$
\left| \int_{[0,1)^d} f(u)\, du \;-\; \frac{1}{n} \sum_{i=1}^{n} f(x_i) \right| \;\le\; V(f)\, D_n^{*}(x_1, \ldots, x_n)
\tag{16.13}
$$
where $V(f)$ denotes the variation of $f$ in the sense of Hardy and Krause.
This means that the error is bounded from above by the product of the variation $V(f)$, which in our case is model and payoff dependent, and the star-discrepancy of the sequence. The bound cannot be used for an automatic error estimation since neither the variation nor the star-discrepancy can be computed easily. It has been shown, though, that sequences exist with a star-discrepancy of the order $O\!\left((\log n)^d / n\right)$. All sequences with this asymptotic upper bound are called low-discrepancy sequences, Niederreiter (1992). One particular low-discrepancy sequence is the Halton sequence.
We start with the construction of the one-dimensional Halton sequence within the interval $[0,1)$. An element of this sequence with index $n$ is calculated from the digit expansion of $n$ in an integer base $b \ge 2$:
$$
n \;=\; \sum_{j=0}^{m} d_j(n)\, b^{\,j}, \qquad d_j(n) \in \{0, 1, \ldots, b-1\}
\tag{16.14}
$$
The radical inverse function reflects these digits at the radix point:
$$
\phi_b(n) \;=\; \sum_{j=0}^{m} d_j(n)\, b^{-j-1}
\tag{16.15}
$$
The one-dimensional Halton sequence to the base $b$ is then given by
$$
x_n \;=\; \phi_b(n), \qquad n = 1, 2, \ldots
\tag{16.16}
$$
For the $d$-dimensional Halton sequence the bases $b_1, \ldots, b_d$ are chosen as pairwise relatively prime integers, usually the first $d$ prime numbers:
$$
x_n \;=\; \bigl( \phi_{b_1}(n), \phi_{b_2}(n), \ldots, \phi_{b_d}(n) \bigr)
\tag{16.17}
$$
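A minimal sketch of this construction, written in Python for illustration (the function names are ours, not XploRe quantlets), computes the radical inverse of (16.15) and the Halton points of (16.16) and (16.17):

```python
def radical_inverse(n, b):
    """Radical inverse phi_b(n): reflect the base-b digits of n at the radix point."""
    phi, base_inv = 0.0, 1.0 / b
    while n > 0:
        n, digit = divmod(n, b)      # digits d_j(n) of n in base b, eq. (16.14)
        phi += digit * base_inv      # accumulate d_j(n) * b^(-j-1), eq. (16.15)
        base_inv /= b
    return phi

def halton(n_points, bases):
    """First n_points of the d-dimensional Halton sequence, eq. (16.16)-(16.17).
    The bases should be pairwise coprime, e.g. the first d primes (2, 3, 5, ...)."""
    return [[radical_inverse(n, b) for b in bases] for n in range(1, n_points + 1)]

# Example: the first 5 two-dimensional Halton points to the bases 2 and 3.
print(halton(5, bases=(2, 3)))
```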
XploRe provides quantlets to generate pseudo random numbers and low discrepancy sequences. The quantlet for the generation of pseudo random numbers is called with the arguments seqnum, d and n, where seqnum is the number of the random generator according to Table 16.1, d is the dimension of the random vector and n is the number of vectors generated. The quantlet for the generation of low discrepancy sequences takes the same arguments, where seqnum is the number of the low discrepancy sequence according to Table 16.2.
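As an illustration of the same functionality outside XploRe, the following sketch generates both kinds of point sets with NumPy and with the quasi-Monte Carlo module of SciPy (available from SciPy 1.7 onwards); the library choice and all variable names are our own assumptions, not part of XploRe:

```python
import numpy as np
from scipy.stats import qmc

d, n = 2, 1000                          # dimension of each vector, number of vectors

# Pseudo random vectors, uniform on [0,1)^d (the role of the Table 16.1 generators).
rng = np.random.default_rng(seed=42)
pseudo = rng.random((n, d))

# Low discrepancy vectors: an unscrambled Halton sequence
# (cf. the low discrepancy sequences of Table 16.2).
halton_points = qmc.Halton(d=d, scramble=False).random(n)

print(pseudo[:3])
print(halton_points[:3])
```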
Figure 16.2 shows that two-dimensional Halton points are much more evenly spread than pseudo random points. This leads to a smaller integration error, at least for ``smooth'' functions.
[Figure 16.2: two-dimensional Halton points compared with pseudo random points]
The positive effect of using more evenly spread points for the simulation task is shown in Figure 16.3.
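This behaviour can be reproduced with a small numerical experiment; the following sketch (our own illustration, with an arbitrarily chosen smooth test function whose exact integral over the unit square equals 1) compares the absolute integration errors of pseudo random and Halton points:

```python
import numpy as np
from scipy.stats import qmc

def f(u):
    # Smooth test function on [0,1]^2; its exact integral is 1.
    return np.prod(0.5 * np.pi * np.sin(np.pi * u), axis=1)

n, d = 4096, 2
pseudo = np.random.default_rng(0).random((n, d))
halton_pts = qmc.Halton(d=d, scramble=False).random(n)

print("MC  error:", abs(f(pseudo).mean() - 1.0))
print("QMC error:", abs(f(halton_pts).mean() - 1.0))
```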
The points of a low-discrepancy sequence are designed to fill the space evenly, without any restrictions on the independence of the sequence points, whereas pseudo random points are designed to show no statistically significant deviation from the independence assumption. Because of this construction of the low discrepancy sequences one cannot calculate an empirical standard deviation of the estimator, as for Monte Carlo methods, and derive an error approximation for the estimation. One possible way out of this dilemma is the randomization of the low-discrepancy sequences using pseudo random numbers, i.e. to shift the original quasi random numbers by pseudo random numbers, Tuffin (1996). If
$$
\tilde{x}_i \;=\; (x_i + u) \bmod 1
\tag{16.18}
$$
for a value $u$ that is uniformly distributed on $[0,1)^d$, with the shift applied componentwise. Then we can calculate an empirical standard deviation of the price estimates obtained from the shifted sequences for independent values $u_1, \ldots, u_m$, which can be used as a measure for the error.
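A minimal sketch of this randomization, following (16.18), is given below; the integrand, the number of shifts m and all names are illustrative assumptions rather than the chapter's code. In an option pricing application the integrand would be the discounted payoff evaluated on underlying values simulated from the (shifted) quasi random vector.

```python
import numpy as np
from scipy.stats import qmc

def f(u):
    # Placeholder for the discounted payoff as a function of the uniform vector u.
    return np.prod(0.5 * np.pi * np.sin(np.pi * u), axis=1)

n, d, m = 1024, 2, 20
x = qmc.Halton(d=d, scramble=False).random(n)    # fixed low discrepancy point set
rng = np.random.default_rng(1)

# m independent random shifts u, eq. (16.18): x_tilde = (x + u) mod 1, componentwise;
# each shifted set gives one independent estimate of the price.
estimates = [f((x + rng.random(d)) % 1.0).mean() for _ in range(m)]

price = np.mean(estimates)
std_error = np.std(estimates, ddof=1) / np.sqrt(m)   # empirical measure of the error
print(price, std_error)
```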
Experiments with payoff functions for European options show that this
randomization technique reduces the convergence rate proportionally.
The advantage of the Quasi Monte Carlo simulation compared to the Monte Carlo simulation vanishes as the dimension increases. In particular, the components with a high index number of the first elements of a low-discrepancy sequence are not evenly distributed, Niederreiter (1992). Figure 16.4 shows that two components with a high index number of the first 1000 Halton points are not evenly distributed. The result for the first 10000 points of the sequence, however, shows that the points become more evenly spread as the number of points increases.
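The effect can be checked directly by looking at a two-dimensional projection onto two high-index components; the sketch below (the dimensions, point counts and grid size are chosen purely for illustration) counts Halton points falling into equally sized cells, where a strongly uneven distribution of counts indicates clustering:

```python
import numpy as np
from scipy.stats import qmc

# Generate a 30-dimensional Halton sequence and project onto two high-index components.
pts = qmc.Halton(d=30, scramble=False).random(10000)
for n in (1000, 10000):
    proj = pts[:n, 27:29]                     # a high-index 2D projection
    # Crude uniformity check: counts in a 10x10 grid of equal cells.
    hist, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=10, range=[[0, 1], [0, 1]])
    print(n, "points: min/max cell count =", hist.min(), hist.max())
```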
However, by using the Brownian bridge path construction method we can limit the effect of the high-dimensional components on a simulated underlying path and the corresponding path variable for the most common path-dependent options, Morokoff (1996). This method starts with an empty path with known start value and at each step calculates the underlying value for the time point with maximum time distance to all time points with already known underlying values, until the whole path is computed. Experimental results show that we still get a faster convergence of the QMC simulation for options with up to 50 time points if we apply this path construction method.
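A minimal sketch of such a Brownian bridge construction for an equidistant grid with a power-of-two number of time steps is given below (function and variable names are our own, not the chapter's); the first and best distributed coordinates of a quasi random vector determine the terminal value and the coarse shape of the path, while the later coordinates only fill in local detail.

```python
import numpy as np

def brownian_bridge_path(z, T=1.0):
    """Build a Brownian path on an equidistant grid from standard normals z
    using the Brownian bridge construction.  len(z) must be a power of two.
    z[0] fixes the terminal value; later entries fill in midpoints."""
    m = len(z)
    t = np.linspace(0.0, T, m + 1)
    w = np.zeros(m + 1)                      # w[0] = 0 is the known start value
    w[m] = np.sqrt(T) * z[0]                 # terminal value from the first normal
    k = 1
    step = m
    while step > 1:
        half = step // 2
        for left in range(0, m, step):
            right = left + step
            mid = left + half
            # Conditional distribution of W(t_mid) given W(t_left) and W(t_right).
            mean = w[left] + (t[mid] - t[left]) / (t[right] - t[left]) * (w[right] - w[left])
            var = (t[mid] - t[left]) * (t[right] - t[mid]) / (t[right] - t[left])
            w[mid] = mean + np.sqrt(var) * z[k]
            k += 1
        step = half
    return t, w

# Example: one path from 8 normals (pseudo random here; in QMC they would come
# from inverting the components of a low discrepancy point).
t, w = brownian_bridge_path(np.random.default_rng(2).standard_normal(8))
print(w)
```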