
2.1 Introduction

The fields of probability and statistics are built on the abstract concepts of probability space and random variable. This has given rise to elegant and powerful mathematical theory, but exact implementation of these concepts on conventional computers seems impossible. In practice, random variables and other random objects are simulated by deterministic algorithms. The purpose of these algorithms is to produce sequences of numbers or objects whose behavior is very hard to distinguish from that of their ``truly random'' counterparts, at least for the application of interest. Key requirements may differ depending on the context. For Monte Carlo methods, the main goal is to reproduce the statistical properties on which these methods are based, so that the Monte Carlo estimators behave as expected, whereas for gambling machines and cryptology, observing the sequence of output values for some time should provide no practical advantage for predicting the forthcoming numbers better than by just guessing at random.

In computational statistics, random variate generation usually proceeds in two steps: (1) generating imitations of independent and identically distributed (i.i.d.) random variables having the uniform distribution over the interval $ (0,1)$, and (2) applying transformations to these i.i.d. $ U(0,1)$ random variates in order to generate (or imitate) random variates and random vectors from arbitrary distributions. These two steps are essentially independent, and the world's best experts on them form two different groups of scientists, with little overlap. The expression (pseudo)random number generator (RNG) usually refers to an algorithm used for Step (1).
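Step (1) can be sketched with a toy generator. The one below is the classical minimal-standard linear congruential generator of Lewis, Goodman, and Miller (multiplier $a = 16807$, modulus $m = 2^{31}-1$), shown here only as an illustration of how a deterministic recurrence imitates i.i.d. $U(0,1)$ variates; generators of this kind, and much better ones, are the subject of Sect. 2.3.

```python
# Toy Step (1): the minimal-standard linear congruential generator
# x_{n+1} = a * x_n mod m, with output u_n = x_n / m in (0,1).
# Illustrative only; not recommended for serious applications.
M = 2**31 - 1   # modulus (a Mersenne prime)
A = 16807       # multiplier

def lcg_uniforms(seed, n):
    """Return n imitations of i.i.d. U(0,1) variates."""
    x = seed % M
    out = []
    for _ in range(n):
        x = (A * x) % M       # linear recurrence modulo m
        out.append(x / M)     # map the state into the interval (0,1)
    return out

us = lcg_uniforms(seed=12345, n=5)
```

The sequence is entirely deterministic: the same seed always reproduces the same output stream, which is precisely why the statistical quality of the recurrence must be analyzed carefully.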

In principle, the simplest way of generating a random variate $ X$ with distribution function $ F$ from a $ U(0,1)$ random variate $ U$ is to apply the inverse of $ F$ to $ U$:

$\displaystyle X = F^{\,-1}(U) {\stackrel {\text{def}}{=}} \min\{ x \mid F(x) \ge U\}\;.$ (2.1)

This is the inversion method. It is easily seen that $ X$ has the desired distribution: $ P[X\le x] = P[F^{\,-1}(U) \le x] = P[U\le F(x)] = F(x)$. Other methods are sometimes preferable when $ F^{\,-1}$ is too difficult or expensive to compute, as will be seen later.
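As a concrete instance of (2.1), consider the exponential distribution with rate $\lambda$, for which $F(x) = 1 - e^{-\lambda x}$ admits the closed-form inverse $F^{-1}(u) = -\ln(1-u)/\lambda$. The sketch below applies this inverse to uniforms from Python's standard library generator (an assumption of this illustration; any $U(0,1)$ source from Step (1) would do):

```python
import math
import random

def exponential_by_inversion(u, lam=1.0):
    """Inversion method (2.1) for the exponential distribution:
    F(x) = 1 - exp(-lam * x), hence F^{-1}(u) = -ln(1 - u) / lam."""
    return -math.log(1.0 - u) / lam

random.seed(42)                               # fixed seed for reproducibility
lam = 2.0
sample = [exponential_by_inversion(random.random(), lam) for _ in range(100_000)]
mean = sum(sample) / len(sample)              # should be close to 1/lam = 0.5
```

A useful by-product of inversion is that it is monotone: it maps the median of $U$ to the median of $X$ (here $\ln 2/\lambda$), a property exploited by variance-reduction techniques such as common random numbers.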

The remainder of this chapter is organized as follows. In the next section, we give a definition and the main requirements of a uniform RNG. Generators based on linear recurrences modulo a large integer $ m$, their lattice structure and quality criteria, and their implementation, are covered in Sect. 2.3. In Sect. 2.4, we have a similar discussion for RNGs based on linear recurrences modulo $ 2$. Nonlinear RNGs are briefly presented in Sect. 2.5. In Sect. 2.6, we discuss empirical statistical testing of RNGs and give some examples. Section 2.7 contains a few pointers to recommended RNGs and software. In Sect. 2.8, we cover non-uniform random variate generators. We first discuss inversion and its implementation in various settings. We then explain the alias, rejection, ratio-of-uniforms, composition, and convolution methods, and provide pointers to the several other methods that apply in special cases.

Important basic references that we recommend are [31], L'Ecuyer (1994, 1998), [80], and [88] for uniform RNGs, and [16], [23], and [29] for non-uniform RNGs.
