

6.5 Concluding Remarks

Stochastic optimization is a major branch of computational statistics. This paper has been a whirlwind tour through some important issues and methods in stochastic optimization. Stochastic optimization applies when there are noisy measurements of the criterion being optimized and/or when injected Monte Carlo randomness is part of the algorithm. Of necessity, we cover only a small fraction of the available methods in this relatively brief review, although the methods described (random search, stochastic approximation, and genetic algorithms) are representative of a broad range of important and widely used algorithms. Even for these methods, the treatment of the specific algorithms is brief. In particular, the subjects covered in this paper of approximately 30 pages are treated in over 160 pages in Spall (2003, Chaps. 1-2, 6-7, and 9-10) and are given an even more detailed treatment in the many specialized books and other references.

There are many challenges to carrying out real-world optimization, including the presence of noise in the function evaluations, the difficulties in distinguishing a globally optimal solution from locally optimal solutions, the "curse of dimensionality," the difficulties associated with nontrivial constraints, and the lack of stationarity in the solution as a result of the conditions of the problem changing over time. Stochastic optimization methods are especially useful in treating some of these challenges. In particular, by definition, they are designed for noisy function evaluations. Further, when considering injected (Monte Carlo) randomness (property II in Sect. 6.1.3), certain stochastic optimization algorithms will (under conditions, of course) serve as global optimizers. That is, the injected randomness provides enough "bounce" to the algorithm to allow for escape from local minima en route to achieving a global minimum.
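
As a minimal, hypothetical illustration of this "bounce" effect (not one of the specific algorithms analyzed in this chapter), the Python sketch below runs a simple random search with Gaussian perturbations on an assumed one-dimensional multimodal loss function. Because the perturbation distribution places nonnegligible probability on steps large enough to cross between basins of attraction, the iterate can leave the basin of a local minimum and, given enough iterations, settle near the global minimum. The loss function, starting point, and perturbation scale are all illustrative choices, not quantities taken from the chapter.

    import numpy as np

    def loss(theta):
        # Hypothetical multimodal test function: global minimum at theta = 0,
        # with many nonglobal local minima induced by the cosine term.
        return theta**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * theta))

    rng = np.random.default_rng(0)
    theta = 4.0                      # start at a nonglobal local minimum
    best = loss(theta)

    for k in range(5000):
        # Injected Monte Carlo randomness: perturb the current estimate.
        # A perturbation scale wide enough to reach neighboring basins
        # provides the "bounce" needed to move past a local minimum.
        candidate = theta + rng.normal(scale=1.0)
        if loss(candidate) < best:   # keep only improving points
            theta, best = candidate, loss(candidate)

    print(f"final estimate: {theta:.4f}, loss: {best:.4f}")

In this sketch the final estimate ends up near the global minimizer at zero rather than the starting local minimum; with a perturbation scale too small to reach neighboring basins, the same procedure would remain stuck near the starting point.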

In summary, while classical deterministic optimization methods (linear and nonlinear programming) are effective for a range of problems, stochastic methods are able to handle many of the problems for which deterministic methods are inappropriate. It is hoped that this summary gives the reader a flavor of the issues, algorithms, and challenges in carrying out optimization in the face of stochastic effects.

Acknowledgements. I appreciate the helpful comments of Dr. Stacy Hill on a draft version of this paper. Funding was provided by the U.S. Navy (contract N00024-98-D-8124) and the JHU/APL Independent Research and Development (IRAD) Program. Selected parts of this article have been reprinted, by permission, from J.C. Spall, Introduction to Stochastic Search and Optimization, ©2003 by John Wiley and Sons, Inc.

