

3.2 Simulation Techniques in Computational Statistics

Consider the well-known definition of the $ t$ statistic with $ n-1$ degrees of freedom:

$\displaystyle t_{n-1} = \frac{\bar{x}-\mu }{s_{x} /\sqrt{n}}\,,$ (3.1)

where the $ x_{i}$ ( $ i=1,\ldots,n$) are assumed to be normally (Gaussian), independently, and identically distributed (NIID) with mean $ \mu$ and variance $ \sigma ^{2}$:

$\displaystyle x_{i} \in {\mathrm{NIID}} (\mu , \sigma ) (i = 1, \ldots , n)\,.$ (3.2)

Nearly $ 100$ years ago, Gosset used a kind of Monte Carlo experiment (without using computers, since they had not yet been invented), before he analytically derived the density function of this statistic (and published his results under the pseudonym Student). He sampled $ n$ values $ x_{i}$ (from an urn) satisfying (3.2), and computed the corresponding value of the statistic defined by (3.1). He repeated this experiment (say) $ m$ times, so that he could compute the estimated density function (EDF) - also called the empirical cumulative distribution function (ECDF) - of the statistic. (Inspired by these empirical results, he did his famous analysis.)

Let us imitate his experiment through the following simulation procedure (this is certainly not the most efficient computer program; a minimal sketch in code follows the list).

  1. Read the simulation inputs: $ \mu$ (mean), $ \sigma ^{2}$ (variance), $ n$ (sample size), and $ m$ (number of macro-replicates, used in step 4).
  2. Take $ n$ samples $ x_{i} \in$   NIID$ (\mu , \sigma)$ (see (3.2) and Chap. II.2 by L'Ecuyer).
  3. Compute the statistic $ t_{n - 1} $ (see (3.1)).
  4. Repeat steps 2 and 3 $ m$ times.
  5. Sort the $ m$ values of $ t_{n - 1} $.
  6. Compute the EDF from the results in step 5.
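
The six steps above can be sketched in a few lines of Python; the following is a minimal sketch, assuming NumPy is available, with merely illustrative values for $ \mu$, $ \sigma$, $ n$, and $ m$.

import numpy as np

rng = np.random.default_rng(seed=123)

# Step 1: read the simulation inputs (illustrative values).
mu, sigma, n, m = 0.0, 1.0, 5, 10_000

t_values = np.empty(m)
for j in range(m):                       # Step 4: repeat steps 2 and 3 m times.
    x = rng.normal(mu, sigma, size=n)    # Step 2: n samples from NIID(mu, sigma).
    s_x = x.std(ddof=1)                  # sample standard deviation s_x
    t_values[j] = (x.mean() - mu) / (s_x / np.sqrt(n))   # Step 3: statistic (3.1)

t_sorted = np.sort(t_values)             # Step 5: sort the m values.
edf = np.arange(1, m + 1) / m            # Step 6: EDF evaluated at the sorted values.

# Verification (see below): compare the estimated 90% quantile with the tabulated value.
print("estimated 90% quantile:", np.quantile(t_values, 0.90))
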
To verify this simulation program, we may compare the result (namely, the EDF) with the results tabulated for Student's density function; for example, does our EDF give a $ 90\,{\%}$ quantile that is not significantly different from the tabulated value (say) $ t_{n-1;\,0.90}$? Next we may proceed to the following, more interesting application.

We may drop the classic assumption formulated in (3.2), and experiment with non-normal distributions. It is easy to sample from such distributions (see again Chap. II.2). However, we are now confronted with several so-called strategic choices (also see step 1 above): Which type of distribution should be selected (lognormal, exponential, etc.)? Which parameter values for that distribution type (mean and variance for the lognormal, etc.)? Which sample size $ n$ (for asymptotically 'large' $ n$, the $ t$ distribution is known to be a good approximation of our EDF)?
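
For instance, under a lognormal input distribution (one possible strategic choice; the parameter values below are illustrative assumptions), only step 2 of the sketch above needs to change:

# Replace step 2 of the sketch above by a non-normal (here lognormal) input.
x = rng.lognormal(mean=0.0, sigma=1.0, size=n)
mu = np.exp(0.0 + 0.5 * 1.0**2)   # true mean of this lognormal, used in (3.1)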

Besides these choices, we must face some tactical issues: How many macro-replicates $ m$ give a good EDF? Can we use special variance reduction techniques (VRTs) - such as common random numbers and importance sampling - to reduce the variability of the EDF? We explain these techniques briefly, as follows.

Common random numbers (CRN) mean that the analysts use the same (pseudo)random numbers (PRN) - symbol $ r$ - when estimating the effects of different strategic choices. For example, CRN are used when comparing the estimated quantiles $ \widehat{t}_{n - 1; 0.90} $ for various distribution types. Obviously, CRN reduces the variance of estimated differences, provided CRN creates positive correlations between the estimators $ \widehat{t}_{n - 1; 0.90} $ being compared.
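
A minimal sketch of CRN follows; the helper function t_quantile_90 is hypothetical (not from any standard library), and the same seed makes both input distributions consume identical pseudorandom numbers $ r$ via inverse-transform sampling.

import numpy as np

def t_quantile_90(inverse_cdf, mu_true, n=5, m=10_000, seed=123):
    rng = np.random.default_rng(seed)   # same seed => common random numbers
    r = rng.random((m, n))              # the PRN r, identical for both cases
    x = inverse_cdf(r)                  # inverse-transform sampling of the x_i
    t = (x.mean(axis=1) - mu_true) / (x.std(axis=1, ddof=1) / np.sqrt(n))
    return np.quantile(t, 0.90)

# Exponential(1) has mean 1; Uniform(0, 1) has mean 0.5.
q_exp = t_quantile_90(lambda r: -np.log(1.0 - r), mu_true=1.0)
q_uni = t_quantile_90(lambda r: r, mu_true=0.5)
print("estimated quantile difference (with CRN):", q_exp - q_uni)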

Antithetic variates (AV) mean that the analysts use the complements $ (1 - r)$ of the PRN ($ r$) in two 'companion' macro-replicates. Obviously, AV reduces the variance of the estimator averaged over these two replicates, provided AV creates negative correlation between the two estimators resulting from the two replicates.
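
A minimal sketch of AV for a single pair of companion replicates, again using an exponential input as an illustration (the function name is hypothetical):

import numpy as np

rng = np.random.default_rng(456)
n = 5
r = rng.random(n)                         # the PRN r for one macro-replicate

def t_stat_from_uniforms(u, mu_true=1.0):
    x = -np.log(1.0 - u)                  # Exponential(1) via inverse transform
    return (x.mean() - mu_true) / (x.std(ddof=1) / np.sqrt(len(x)))

t_plus  = t_stat_from_uniforms(r)         # replicate using r
t_minus = t_stat_from_uniforms(1.0 - r)   # companion replicate using 1 - r
t_av = (t_plus + t_minus) / 2.0           # estimator averaged over the pair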

Importance sampling (IS) is used when the analysts wish to estimate a rare event, such as the probability of the Student statistic exceeding the $ 99.999\,{\%}$ quantile. IS increases that probability (for example, by sampling from a distribution with a fatter tail) - and later on, IS corrects for this distortion of the input distribution (through the likelihood ratio). IS is not as simple as CRN and AV - but without IS, too much computer time may be needed. See Glasserman et al. (2000).
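
Because IS for the Student statistic itself requires more machinery, the following minimal sketch illustrates the idea on a simpler rare event, namely the tail probability $ P(X > 4)$ for a standard normal $ X$; the shifted sampling density and the sample size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(789)
c, m = 4.0, 100_000

# Sample from the shifted density N(c, 1), under which the rare event is common.
y = rng.normal(c, 1.0, size=m)
# Likelihood ratio of the target N(0, 1) over the sampling density N(c, 1).
lr = np.exp(-c * y + 0.5 * c**2)
p_is = np.mean((y > c) * lr)              # IS estimate of P(X > c)

p_crude = np.mean(rng.normal(0.0, 1.0, size=m) > c)   # crude Monte Carlo, for contrast
print("IS estimate:", p_is, "crude estimate:", p_crude)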

There are many more VRTs. Both CRN and AV are intuitively attractive and easy to implement, but the more popular of the two is CRN. The most useful VRT may be IS. In practice, the other VRTs often do not reduce the variance drastically, so many users prefer to spend more computer time instead of applying VRTs. (VRTs are a great topic for doctoral research!) For more details on VRTs, I refer to Kleijnen and Rubinstein (2001).

Finally, the density function of the sample data $ x_{i}$ may be more than an academic problem: Suppose a very limited set of historical data is given, and we must analyze these data while knowing that they do not satisfy the classic assumption formulated in (3.2). Then bootstrapping may help, as follows (also remember the six steps above; a minimal sketch in code follows the list).

  1. Read the bootstrap sample size $ B$ (usual symbol in bootstrapping, comparable with $ m$ - number of macro-replicates - in step 1 above).
  2. Take $ n$ samples with replacement from the original sample $ x_{i}$; this sampling gives $ x_{i}^{\ast } $ (the superscript $ ^{\ast}$ denotes bootstrapped values, to be distinguished from the original values).
  3. From these $ x_{i}^{\ast } $ compute the statistic $ t_{n - 1}^{\ast } $ (see (3.1)).
  4. Repeat steps 2 and 3 $ B$ times.
  5. Sort the $ B$ values of $ t_{n - 1}^{\ast } $.
  6. Compute the EDF from the results in step 5.
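
A minimal sketch of these six bootstrap steps, assuming the historical data are held in a NumPy array; the data values and $ B$ are illustrative, and the original sample mean plays the role of $ \mu$ in (3.1).

import numpy as np

rng = np.random.default_rng(2024)
x_orig = np.array([2.1, 3.4, 1.7, 5.0, 2.8, 4.2, 3.9, 1.2, 2.5, 3.1])  # given data
n = len(x_orig)
B = 10_000                                    # Step 1: bootstrap sample size.

t_star = np.empty(B)
for b in range(B):                            # Step 4: repeat steps 2 and 3 B times.
    x_star = rng.choice(x_orig, size=n, replace=True)   # Step 2: resample with replacement.
    s_star = x_star.std(ddof=1)
    # Step 3: statistic (3.1), with the original sample mean in the role of mu.
    t_star[b] = (x_star.mean() - x_orig.mean()) / (s_star / np.sqrt(n))

t_star_sorted = np.sort(t_star)               # Step 5: sort the B values.
edf_star = np.arange(1, B + 1) / B            # Step 6: EDF of the bootstrapped statistic.
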
In summary, bootstrapping is just a Monte Carlo experiment - using resampling with replacement of a given data set. (There is also a parametric bootstrap, which comes even closer to our simulation of Gossett's original experiment.) Bootstrapping is further discussed in Efron and Tibshirani (1993) and in Chap. III.2 (by Mammen).
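
In the parametric variant, step 2 samples from a distribution fitted to the original data instead of resampling the data themselves; a minimal sketch, assuming a fitted normal distribution:

# Parametric bootstrap: replace step 2 above by sampling from a fitted distribution
# (here a normal with the sample mean and sample standard deviation of x_orig).
x_star = rng.normal(x_orig.mean(), x_orig.std(ddof=1), size=n)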

