2.2 Uniform Random Number Generators

2.2.1 Physical Devices

Random numbers can be generated via physical mechanisms such as the timing between successive events in atomic decay, thermal noise in semiconductors, and the like. A key issue when constructing a RNG based on a physical device is that a "random" or "chaotic" output does not suffice; the numbers produced must be, at least to a good approximation, realizations of independent and uniformly distributed random variables. If the device generates a stream of bits, which is typical, then each bit should be 0 or $ 1$ with equal probability, and be independent of all the other bits. In general, this cannot be proved, so one must rely on the results of empirical statistical testing to become convinced that the output values have the desired statistical behavior. Not all these devices are reliable, but some apparently are. I recently tested two of them, and they passed all the statistical tests that I tried.

For computational statistics, physical devices have several disadvantages compared to a good algorithmic RNG that fits in a few lines of code. For example, (a) they are much more cumbersome to install and run; (b) they are more costly; (c) they are slower; (d) they cannot reproduce exactly the same sequence twice. Item (d) is important in several contexts, including program verification and debugging as well as comparison of similar systems by simulation with common random numbers to reduce the variance ([3,21,34]). Nevertheless, these physical RNGs can be useful for selecting the seed of an algorithmic RNG, more particularly for applications in cryptology and for gaming machines, where frequent reseeding of the RNG with an external source of entropy (or randomness) is important. A good algorithmic RNG whose seed is selected at random can be viewed as an extensor of randomness, stretching a short random seed into a long sequence of pseudorandom numbers.

2.2.2 Generators Based on a Deterministic Recurrence

RNGs used for simulation and other statistical applications are almost always based on deterministic algorithms that fit the following framework, taken from [36]: a RNG is a structure $ ({\mathcal{S}}, \mu, f, {\mathcal{U}}, g)$ where $ {\mathcal{S}}$ is a finite set of states (the state space), $ \mu$ is a probability distribution on $ {\mathcal{S}}$ used to select the initial state (or seed) $ s_0$, $ f : {\mathcal{S}}\to {\mathcal{S}}$ is the transition function, $ {\mathcal{U}}$ is the output space, and $ g : {\mathcal{S}}\to {\mathcal{U}}$ is the output function. Usually, $ {\mathcal{U}} = (0,1)$, and we shall assume henceforth that this is the case. The state of the RNG evolves according to the recurrence $ s_i = f(s_{i-1})$, for $ i \ge 1$, and the output at step $ i$ is $ u_i = g(s_i)\in{\mathcal{U}}$. The output values $ u_0,u_1,u_2,\ldots$ are the so-called random numbers produced by the RNG.
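
To make this framework concrete, here is a minimal sketch in Python. The transition function is a small multiplicative linear congruential generator whose constants are chosen only for illustration, and the distribution $ \mu$ is represented implicitly by letting the caller supply the seed.

M = 2**31 - 1          # illustrative modulus; state space S = {1, ..., M-1}
A = 16807              # illustrative multiplier

def f(s):
    """Transition function f : S -> S."""
    return (A * s) % M

def g(s):
    """Output function g : S -> U = (0,1)."""
    return s / M

def rng_stream(s0, n):
    """Return u_0, ..., u_{n-1}, where s_i = f(s_{i-1}) and u_i = g(s_i)."""
    s = s0
    out = []
    for _ in range(n):
        out.append(g(s))   # u_i = g(s_i)
        s = f(s)           # s_{i+1} = f(s_i)
    return out

print(rng_stream(12345, 5))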

Because $ {\mathcal{S}}$ is finite, there must be some finite $ l\ge 0$ and $ j > 0$ such that $ s_{l+j} = s_{l}$. Then, for all $ i\ge l$, one has $ s_{i+j} = s_{i}$ and $ u_{i+j} = u_{i}$, because both $ f$ and $ g$ are deterministic. That is, the state and output sequences are eventually periodic. The smallest positive $ j$ for which this happens is called the period length of the RNG, and is denoted by $ \rho$. When $ l=0$, the sequence is said to be purely periodic. Obviously, $ \rho\le \vert{\mathcal{S}}\vert$, the cardinality of $ {\mathcal{S}}$. If the state has a $ k$-bit representation on the computer, then $ \rho\le 2^k$. Good RNGs are designed so that their period length $ \rho$ is not far from that upper bound. In general, the value of $ \rho$ may depend on the seed $ s_0$, but good RNGs are normally designed so that the period length is the same for all admissible seeds.
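
When the state space is small enough to enumerate, the transient index $ l$ and the period length $ \rho$ can be found by brute force, simply iterating the recurrence and recording the states until one repeats. A sketch (the function name and the toy recurrence in the example are illustrative only; this approach is of course infeasible for the huge state spaces of practical RNGs):

def period_and_transient(f, s0):
    """Iterate s_{i+1} = f(s_i) from s0 until some state repeats.
    Returns (l, rho): the index where the cycle starts and the period length.
    Only practical when the state space is small."""
    seen = {}              # state -> index of its first visit
    s, i = s0, 0
    while s not in seen:
        seen[s] = i
        s = f(s)
        i += 1
    l = seen[s]            # first index of the repeated state
    rho = i - l            # period length
    return l, rho

# Example with a tiny recurrence s_{i+1} = (5*s_i + 3) mod 64:
print(period_and_transient(lambda s: (5 * s + 3) % 64, 9))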

In practical implementations, it is important that the output be strictly between 0 and $ 1$, because $ F^{\,-1}(U)$ is often infinite when $ U$ is 0 or $ 1$. All good implementations take care of that. However, for the mathematical analysis of RNGs, we often assume that the output space is $ [0,1)$ (i.e., 0 is admissible), because this simplifies the analysis considerably without making much difference in the mathematical structure of the generator.
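
One common way to implement this, assuming the recurrence produces a $ k$-bit integer, is to center the value before scaling. This is only a sketch, not the convention of any particular library:

def to_open_interval(s, k):
    """Map an integer s in {0, ..., 2^k - 1} to a value strictly inside (0, 1)."""
    return (s + 0.5) / 2.0**k      # never exactly 0 and never exactly 1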

2.2.3 Quality Criteria

What important quality criteria should we consider when designing RNGs? An extremely long period is obviously essential, to make sure that no wrap-around over the cycle can occur in practice. The length of the period must be guaranteed by a mathematical proof. The RNG must also be efficient (run fast and use only a small amount of memory), repeatable (able to reproduce exactly the same sequence as many times as we want), and portable (work the same way in different software/hardware environments). The availability of efficient jump-ahead methods that can quickly compute $ s_{i+\nu}$ given $ s_i$, for any large $ \nu $ and any $ i$, is also very useful, because it permits one to partition the RNG sequence into long disjoint streams and substreams of random numbers, in order to create an arbitrary number of virtual generators from a single RNG ([34,59]). These virtual generators can be used on parallel processors or to support different sources of randomness in a large simulation model, for example.
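
As an illustration of jump-ahead, consider again a simple linear congruential recurrence $ s_{j+1} = (a s_j + c)\, {\mathop{\text{mod}}}\, m$. The affine map $ s \mapsto as+c$ can be composed with itself by binary exponentiation, so $ s_{i+\nu}$ is obtained from $ s_i$ in $ O(\log \nu)$ operations rather than $ \nu$. The sketch below applies only to this toy recurrence; it is not the jump-ahead algorithm of any particular RNG package:

def jump_ahead_lcg(s, nu, a, c, m):
    """Compute s_{i+nu} from s_i for the LCG s_{j+1} = (a*s_j + c) mod m,
    in O(log nu) steps, by binary exponentiation of the affine map x -> a*x + c."""
    A, C = 1, 0                      # (A, C) represents x -> A*x + C; start with identity
    base_A, base_C = a % m, c % m    # one step of the recurrence
    while nu > 0:
        if nu & 1:
            # compose the accumulated map with the base map
            A, C = (A * base_A) % m, (base_A * C + base_C) % m
        # square the base map
        base_A, base_C = (base_A * base_A) % m, (base_A * base_C + base_C) % m
        nu >>= 1
    return (A * s + C) % m

# Check against naive iteration on a small example:
a, c, m, s0 = 5, 3, 101, 7
s = s0
for _ in range(10):
    s = (a * s + c) % m
assert jump_ahead_lcg(s0, 10, a, c, m) == s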

Consider a RNG with state space $ {\mathcal{S}} = \{0,\ldots,2^{1000}-1\}$, transition function $ s_{i+1} = f(s_i) = (s_i + 1)\, {\mathop{\text{mod}}}\, 2^{1000}$, and $ u_i = g(s_i) = s_i/2^{1000}$. This RNG has period length $ 2^{1000}$ and enjoys all the nice properties described in the preceding paragraph, but is far from imitating "randomness". In other words, these properties are not sufficient.
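
The defect shows up as soon as one looks at pairs of successive values. A sketch of the same construction on a much smaller scale (a counter modulo $ 2^{10}$, purely for illustration):

m = 2**10                              # small stand-in for 2^1000
def counter_next(s): return (s + 1) % m
def counter_output(s): return s / m

s = 123
pairs = []
for _ in range(20):
    u = counter_output(s)
    s = counter_next(s)
    pairs.append((u, counter_output(s)))   # (u_i, u_{i+1})

# Every pair satisfies u_{i+1} = u_i + 1/m (mod 1): all points lie on a single
# line in the unit square, nothing like a uniform distribution over (0,1)^2.
print(pairs[:5])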

The real-valued random variables $ u_0,u_1,u_2,\ldots$ are i.i.d. $ U(0,1)$ if and only if, for all integers $ i\ge 0$ and $ t>0$, the vector $ \boldsymbol{u}_{i,t} = (u_i,\ldots,u_{i+t-1})$ is uniformly distributed over the $ t$-dimensional unit hypercube $ (0,1)^t$. Of course, this cannot hold for algorithmic RNGs because any vector of $ t$ successive values produced by the generator must belong to the finite set

$\displaystyle \Psi_t = \{(u_0,\ldots,u_{t-1}) : s_0\in {\mathcal{S}}\}\;,$    

which is the set of all vectors of $ t$ successive output values, from all possible initial states. Here we interpret $ \Psi_t$ as a multiset, which means that the vectors are counted as many times as they appear, and the cardinality of $ \Psi_t$ is exactly equal to that of  $ {\mathcal{S}}$.

Suppose we select the seed $ s_0$ at random, uniformly over $ {\mathcal{S}}$. This can be approximated by using some physical device, for example. Then, the vector $ \boldsymbol{u}_{0,t}$ has the uniform distribution over the finite set $ \Psi_t$. And if the sequence is purely periodic for all $ s_0$, $ \boldsymbol{u}_{i,t} = (u_i,\ldots,u_{i+t-1})$ is also uniformly distributed over $ \Psi_t$ for all $ i\ge 0$. Since the goal is to approximate the uniform distribution over $ (0,1)^t$, it immediately becomes apparent that $ \Psi_t$ should be evenly spread over this unit hypercube. In other words, $ \Psi_t$ approximates $ (0,1)^t$ as the sample space from which the vectors of successive output values are drawn randomly, so it must be a good approximation of $ (0,1)^t$ in some sense. The design of good-quality RNGs must therefore involve practical ways of measuring the uniformity of the corresponding sets $ \Psi_t$ even when they have huge cardinalities. In fact, a large state space $ {\mathcal{S}}$ is necessary to obtain a long period, but an even more important reason for having a huge number of states is to make sure that $ \Psi_t$ can be large enough to provide a good uniform coverage of the unit hypercube, at least for moderate values of $ t$.
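
For a toy generator, $ \Psi_t$ can be constructed explicitly and its coverage of the unit hypercube inspected directly. A sketch for $ t=2$, with parameters chosen only so that the state space is small enough to enumerate:

# Enumerate Psi_2 = {(u_0, u_1) : s_0 in S} for a toy multiplicative LCG.
m, a = 101, 12                    # illustrative parameters; S = {1, ..., 100}

def nxt(s): return (a * s) % m
def out(s): return s / m

psi_2 = [(out(s0), out(nxt(s0))) for s0 in range(1, m)]

# |Psi_2| = |S| = 100 points in the unit square; for this kind of generator they
# form a lattice (a few families of parallel lines), and how evenly that lattice
# covers the square is exactly what the uniformity measures below quantify.
print(len(psi_2), psi_2[:3])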

More generally, we may also want to measure the uniformity of sets of the form

$\displaystyle \Psi_I = \left\{\left(u_{i_1},\ldots,u_{i_t}\right) \mid s_0 \in {\mathcal{S}}\right\}\;,$    

where $ I = \{i_1,\cdots, i_t\}$ is a fixed set of non-negative integers such that $ 0 \le i_1 < \cdots < i_t$. As a special case, we recover $ \Psi_t = \Psi_I$ when $ I = \{0,\ldots,t-1\}$.

The uniformity of a set $ \Psi_I$ is typically assessed by measuring the discrepancy between the empirical distribution of its points and the uniform distribution over $ (0,1)^t$ ([80,25,53]). Discrepancy measures are equivalent to goodness-of-fit test statistics for the multivariate uniform distribution. They can be defined in many different ways. In fact, the choice of a specific definition typically depends on the mathematical structure of the RNG to be studied, and the reason for this is very pragmatic: we must be able to compute these measures quickly even when $ {\mathcal{S}}$ has very large cardinality. This obviously excludes any method that requires explicit generation of the sequence over its entire period. The selected discrepancy measure is usually computed for each set $ I$ in some predefined class $ {\mathcal{J}}$, these values are weighted or normalized by factors that depend on $ I$, and the worst case (or the average) over $ {\mathcal{J}}$ is adopted as a figure of merit used to rank RNGs. The choice of $ {\mathcal{J}}$ and of the weights is arbitrary. Typically, $ {\mathcal{J}}$ would contain sets $ I$ such that $ t$ and $ i_t-i_1$ are rather small. Examples of such figures of merit will be given when we discuss specific classes of RNGs.
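
As a small concrete illustration (one-dimensional, hence much simpler than the measures actually used for RNGs, which exploit the structure of the recurrence instead of enumerating points), the star discrepancy of a finite point set in $ [0,1)$ can be computed exactly from the sorted points; it coincides with the Kolmogorov-Smirnov statistic against the uniform distribution:

def star_discrepancy_1d(points):
    """Exact star discrepancy of a point set in [0,1):
    D*_n = max_i max( i/n - x_(i), x_(i) - (i-1)/n ) over the sorted points."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# The regularly spaced set {0.5/n, 1.5/n, ...} attains the minimum value 1/(2n):
n = 10
print(star_discrepancy_1d([(i + 0.5) / n for i in range(n)]))   # prints 0.05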

2.2.4 Statistical Testing

Good RNGs are designed based on mathematical analysis of their properties, then implemented and submitted to batteries of empirical statistical tests. These tests try to detect empirical evidence against the null hypothesis $ {\mathcal{H}}_0$: "the $ u_i$ are realizations of i.i.d. $ U(0,1)$ random variables". A test can be defined by any function $ T$ that maps a sequence $ u_0,u_1,\ldots$ in $ (0,1)$ to a real number $ X$, and for which a good approximation is available for the distribution of the random variable $ X$ under $ {\mathcal{H}}_0$. For the test to be implementable, $ X$ must depend on only a finite (but perhaps random) number of $ u_i$'s. Passing many tests may improve one's confidence in the RNG, but never guarantees that the RNG is foolproof for all kinds of simulations.
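
A minimal example of such a test statistic is the classical chi-square test for uniformity over $ k$ equal subintervals. The sketch below only computes the statistic $ X$ and its degrees of freedom; the comparison with the chi-square distribution (e.g., computing a p-value) is left to any standard statistics library:

def chi_square_uniformity(us, k=16):
    """Chi-square statistic for H0: the u_i are i.i.d. U(0,1), based on the
    counts of values falling in each of k equal subintervals of (0,1).
    Under H0, X is approximately chi-square with k-1 degrees of freedom."""
    n = len(us)
    counts = [0] * k
    for u in us:
        counts[min(int(u * k), k - 1)] += 1
    expected = n / k
    x = sum((c - expected) ** 2 / expected for c in counts)
    return x, k - 1

# Hypothetical usage with the toy generator sketched in Sect. 2.2.2:
# X, df = chi_square_uniformity(rng_stream(12345, 10000))
# A value of X far in the right tail of the chi-square(df) distribution is
# evidence against H0.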

Building a RNG that passes all statistical tests is an impossible dream. Consider, for example, the class of all tests that examine the first (most significant) $ b$ bits of $ n$ successive output values, $ u_0,\ldots,u_{n-1}$, and return a binary value $ X\in \{0,1\}$. Select $ \alpha\in (0,1)$ so that $ \alpha 2^{bn}$ is an integer and let $ {\mathcal{T}}_{n,b,\alpha}$ be the tests in this class that return $ X=1$ for exactly $ \alpha 2^{bn}$ of the $ 2^{bn}$ possible output sequences. We may say that the sequence fails the test when $ X=1$. The number of tests in $ {\mathcal{T}}_{n,b,\alpha}$ is equal to the number of ways of choosing $ \alpha 2^{bn}$ distinct objects among $ 2^{bn}$. The chosen objects are the sequences that fail the test. Now, for any given output sequence, the number of such tests that return $ 1$ for this particular sequence is equal to the number of ways of choosing the other $ \alpha 2^{bn} -1$ sequences that also fail the test. This is the number of ways of choosing $ \alpha 2^{bn} -1$ distinct objects among $ 2^{bn}-1$. In other words, as pointed out by [64], every output sequence fails exactly the same number of tests! This result should not be surprising. Viewed from a different angle, it is essentially a restatement of the well-known fact that under $ {\mathcal{H}}_0$, each of the $ 2^{bn}$ possible sequences has the same probability of occurring, so one could argue that none should be considered more random than any other ([31]).
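
A tiny numerical illustration of this counting argument: take $ b=1$ and $ n=2$, so there are $ 2^{bn} = 4$ possible bit sequences, and take $ \alpha = 1/2$. Each test in $ {\mathcal{T}}_{2,1,1/2}$ rejects exactly $ \alpha 2^{bn} = 2$ of the four sequences, so there are $ \binom{4}{2} = 6$ such tests, and any given sequence is rejected by exactly $ \binom{3}{1} = 3$ of them, whichever sequence it is.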

This viewpoint seems to lead to a dead end. For statistical testing to be meaningful, not all tests should be considered on equal footing. So which ones are more important? Any answer is certainly tainted with its share of arbitrariness. However, for large values of $ n$, the number of tests is huge and all but a tiny fraction are too complicated even to be implemented. So we may say that bad RNGs are those that fail simple tests, whereas good RNGs fail only complicated tests that are hard to find and run. This common-sense compromise has been generally adopted in one way or another.

Experience shows that RNGs with very long periods, good structure of their set $ \Psi_t$, and based on recurrences that are not too simplistic, pass most reasonable tests, whereas RNGs with short periods or bad structures are usually easy to crack by standard statistical tests. For sensitive applications, it is a good idea, when this is possible, to apply additional statistical tests designed in close relation with the random variable of interest (e.g., based on a simplification of the stochastic model being simulated, and for which the theoretical distribution can be computed).

Our discussion of statistical tests continues in Sect. 2.6.

2.2.5 Cryptographically Strong Generators

One way of defining an ideal RNG would be that no statistical test can distinguish its output sequence from an i.i.d. $ U(0,1)$ sequence. If unlimited computing time is available, no finite-state RNG can satisfy this requirement, because by running it long enough one can eventually figure out its periodicity. But what if we impose a limit on the computing time? This can be analyzed formally in the framework of asymptotic computational complexity theory, under the familiar "rough-cut" assumption that polynomial-time algorithms are practical and others are not.

Consider a family of RNGs $ \{{\mathcal{G}}_k = ({\mathcal{S}}_k, \mu_k, f_k, {\mathcal{U}}_k, g_k),\, k=1,2,\ldots \}$ where $ {\mathcal{S}}_k$ is of cardinality $ 2^k$ (i.e., $ {\mathcal{G}}_k$ has a $ k$-bit state). Suppose that the transition and output functions $ f$ and $ g$ can be computed in time bounded by a polynomial in $ k$. Let $ {\mathcal{T}}$ be the class of statistical tests that run in time bounded by a polynomial in $ k$ and try to differentiate between the output sequence of the RNG and an i.i.d. $ U(0,1)$ sequence. The RNG family is called polynomial-time perfect if there is a constant $ \epsilon > 0$ such that for all $ k$, no test in $ {\mathcal{T}}$ can differentiate correctly with probability larger than $ 1/2 + e^{-k\epsilon}$. This is equivalent to asking that no polynomial-time algorithm can predict any given bit of $ u_i$ with probability of success larger than $ 1/2 + e^{-k\epsilon}$, after observing $ u_0,\ldots, u_{i-1}$. This links unpredictability with statistical uniformity and independence. For the proofs and additional details, see, e.g., [2,55,33], and [69]. This theoretical framework has been used to define a notion of reliable RNG in the context of cryptography. But the guarantee is only asymptotic; it does not necessarily tell what value of $ k$ is large enough for the RNG to be secure in practice. Moreover, specific RNG families have been proved to be polynomial-time perfect only under yet unproven conjectures. So far, no one has been able to prove even their existence. Most RNGs discussed in the remainder of this chapter are known not to be polynomial-time perfect. However, they are fast, convenient, and have good enough statistical properties when their parameters are chosen carefully.
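
A classical example of a family conjectured (but not proved) to be polynomial-time perfect is the Blum-Blum-Shub generator, whose unpredictability rests on the assumed hardness of factoring. The sketch below uses toy-sized primes that provide no security whatsoever and is only meant to show the mechanics:

def bbs_bits(p, q, seed, nbits):
    """Blum-Blum-Shub sketch: x_{i+1} = x_i^2 mod (p*q), with p and q primes
    congruent to 3 mod 4 and the seed coprime to p*q; output the least
    significant bit of each x_i. Real use requires primes of cryptographic
    size and a seed drawn from a genuine entropy source."""
    n = p * q
    x = seed % n
    bits = []
    for _ in range(nbits):
        x = (x * x) % n
        bits.append(x & 1)
    return bits

# Toy example (499 and 547 are primes congruent to 3 mod 4; insecure sizes):
print(bbs_bits(499, 547, seed=159201, nbits=16))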

