
3.1 Introduction

By definition, computer simulation (or Monte Carlo) models are not solved by mathematical analysis (for example, differential calculus), but are used for numerical experimentation. These experiments are meant to answer questions of interest about the real world; i.e., the experimenters may use their simulation model to answer 'what if' questions - this is also called sensitivity analysis. Sensitivity analysis - guided by the statistical theory on design of experiments (DOE) - is the focus of this chapter. Sensitivity analysis may further serve validation, optimization, and risk (or uncertainty) analysis for finding robust solutions; see Kleijnen (1998) and Kleijnen et al. (2003a,b). Note that optimization is also discussed at length in Chap. II.6 by Spall.

Though I assume that the reader is familiar with basic simulation, I shall summarize a simple Monte Carlo example (based on the well-known Student $t$ statistic) in Sect. 3.2. This example further illustrates bootstrapping and variance reduction techniques.
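To give a feel for such an experiment before Sect. 3.2, consider the following minimal sketch - my own illustration, not the chapter's actual example - which uses the Monte Carlo method to estimate the true type-I error rate of the two-sided $t$ test when the data are exponentially (hence non-normally) distributed; the sample size, number of macroreplicates, and nominal significance level are arbitrary choices.

```python
# Minimal sketch (illustration only): Monte Carlo estimation of the true
# type-I error rate of the two-sided Student t test under non-normal
# (exponential) data; all constants below are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(123)
n = 10           # sample size per macroreplicate
reps = 100_000   # number of Monte Carlo macroreplicates
alpha = 0.05     # nominal type-I error rate
mu = 1.0         # true mean of the exponential distribution
crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # critical value of the t test

rejections = 0
for _ in range(reps):
    x = rng.exponential(scale=mu, size=n)    # non-normal data with mean mu
    t = (x.mean() - mu) / (x.std(ddof=1) / np.sqrt(n))
    rejections += abs(t) > crit              # count false rejections

print(f"estimated type-I error: {rejections / reps:.4f} (nominal: {alpha})")
```

The estimated error rate will generally deviate from the nominal 5%, illustrating the effect of non-normality on the $t$ statistic that Sect. 3.2 studies in detail.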

Further, I assume that the reader's familiarity with DOE is restricted to elementary DOE. In this chapter, I summarize classic DOE, and extend it to newer methods (for example, DOE for interpolation using Kriging; Kriging is named after the South African mining engineer D.G. Krige).

Traditionally, 'the shoemaker's children go barefoot'; i.e., users of computational statistics ignore statistical issues - such as sensitivity analysis - in their own simulation results. Nevertheless, they should address tactical issues - the number of (macro)replicates, variance reduction techniques - and strategic issues - which situations to simulate, and how to perform the sensitivity analysis of the resulting data. Both types of issues are addressed in this chapter.

Note the following terminology. DOE speaks of 'factors' with 'levels', whereas simulation analysts may speak of 'inputs' or 'parameters' with 'values'. DOE talks about 'design points' or 'runs', whereas simulationists may talk about 'situations', 'cases', or 'scenarios'.

Classic DOE methods for real, non-simulated systems were developed for agricultural experiments in the 1930s, and - since the 1950s - for experiments in engineering, psychology, etc. (Classic designs include fractional factorials, as we shall see.) In those real systems it is impractical to experiment with 'many' factors; $k = 10$ factors seems a maximum. Moreover, it is then hard to experiment with factors that have more than 'a few' values; five values per factor seems a maximum. Finally, these experiments are run in 'one shot' - for example, in one growing season - and not sequentially. In simulation, however, these limitations do not hold!

Two textbooks on classic DOE for simulation are Kleijnen (1975, 1987); an update is Kleijnen (1998). A bird's-eye view of DOE in simulation is Kleijnen et al. (2003a), which covers a wider area than this review.

Note further the following terminology. I speak of the Monte Carlo method whenever (pseudo)random numbers are used; for example, in Sect. 3.2 I apply the Monte Carlo method to estimate the behavior of the $t$ statistic in the case of non-normality. (The Monte Carlo method may also be used to estimate multiple integrals, which is a deterministic problem outside the scope of this handbook.) I use the term simulation whenever the analysts compute the output of a dynamic model; i.e., the analysts do not use calculus to find the solution of a set of differential or difference equations. The dynamic model may be either stochastic or deterministic. Stochastic simulation uses the Monte Carlo method; it is often applied in telecommunications and logistics. Deterministic simulation is often applied in computer-aided engineering (CAE). Finally, I use the term metamodel for models that approximate - or model - the input/output (I/O) behavior of the underlying simulation model; for example, a polynomial regression model is a popular metamodel (as we shall see). Metamodels are used - consciously or not - to design and analyze experiments with simulation models. In the simulation literature, metamodels are also called response surfaces, emulators, etc.
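As a concrete illustration of the metamodel concept, the following minimal sketch - my own toy example, in which the hypothetical function simulate() stands in for an expensive simulation model - fits a first-order polynomial metamodel to the I/O data of a $2^2$ factorial design (a classic design type returned to in Sect. 3.4).

```python
# Minimal sketch (illustration only): a first-order polynomial metamodel
# y = beta_0 + beta_1*x_1 + beta_2*x_2 fitted by least squares to the I/O
# data of a 2^2 factorial design; simulate() is a hypothetical stand-in
# for an expensive stochastic simulation model.
import numpy as np

rng = np.random.default_rng(1)

def simulate(x1, x2):
    # one (noisy) simulation run at the factor combination (x1, x2)
    return 2.0 + 3.0 * x1 - 1.5 * x2 + rng.normal(scale=0.1)

design = [(-1, -1), (-1, +1), (+1, -1), (+1, +1)]   # 2^2 factorial design
y = np.array([simulate(x1, x2) for x1, x2 in design])

X = np.array([[1.0, x1, x2] for x1, x2 in design])  # regression matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares estimates
print("estimated metamodel coefficients:", beta)
```

The estimated coefficients approximate the first-order factor effects; this is exactly the role that polynomial metamodels play in the sensitivity analyses of Sect. 3.4.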

The remainder of this chapter is organized as follows. Section 3.2 presents a simple Monte Carlo experiment with Student's $t$ statistic, including bootstrapping and variance reduction techniques. Section 3.3 discusses the black-box approach to simulation experiments and the corresponding metamodels - especially polynomial and Kriging models. Section 3.4 starts with simple regression models with a single factor; proceeds with designs for multiple factors, including designs for first-order and second-order polynomial models; and concludes with screening designs for hundreds of factors. Section 3.5 introduces Kriging interpolation, which has hardly been applied in random simulation but has already established a track record in deterministic simulation and spatial statistics. Kriging often uses space-filling designs, such as Latin hypercube sampling (LHS). Section 3.6 gives conclusions and further research topics.

