
References

1
Allwein, E., Schapire, R., Singer, Y.: Reducing multiclass to binary: a unifying approach for margin classifiers. J. Machine Learning Research 1, 113-141 (2001).

2
Amit, Y., Geman, D.: Shape quantization and recognition with randomized trees. Neural Computation 9, 1545-1588 (1997).

3
Audrino, F., Barone-Adesi, G.: A multivariate FGD technique to improve VaR computation in equity markets. To appear in Computational Management Science.

4
Audrino, F., Bühlmann, P.: Volatility estimation with functional gradient descent for very high-dimensional financial time series. J. Computational Finance 6(3), 65-89 (2003).

5
Bartlett, P.L.: Prediction algorithms: complexity, concentration and convexity. In: Proceedings of the 13th IFAC Symposium on System Identification, pp. 1507-1517 (2003).

6
Bartlett, P.L., Jordan, M.I., McAuliffe, J.D.: Convexity, classification, and risk bounds. Technical Report 638, Dept. of Statistics, Univ. of Calif., Berkeley (2003). Available from http://www.stat.berkeley.edu/tech-reports/index.html

7
Bauer, E., Kohavi, R.: An empirical comparison of voting classification algorithms: bagging, boosting and variants. Machine Learning 36, 105-139 (1999).

8
Benner, A.: Application of ``aggregated classifiers'' in survival time studies. In: COMPSTAT 2002 - Proceedings in Computational Statistics - 15th Symposium held in Berlin (Eds. Härdle, W. and Rönz, B.), Physica-Verlag, Heidelberg (2002).

9
Breiman, L.: Bagging predictors. Machine Learning 24, 123-140 (1996).

10
Breiman, L.: Out-of-bag estimation. Technical Report (1996). Available from ftp://ftp.stat.berkeley.edu/pub/users/breiman/

11
Breiman, L.: Arcing classifiers. Annals of Statistics 26, 801-824 (1998).

12
Breiman, L.: Prediction games & arcing algorithms. Neural Computation 11, 1493-1517 (1999).

13
Breiman, L.: Random forests. Preprint. Available from http://stat-www.berkeley.edu/users/breiman/rf.html

14
Breiman, L.: Population theory for boosting ensembles. To appear in Annals of Statistics, 32(1) (2004).

15
Bühlmann, P.: Bagging, subagging and bragging for improving some prediction algorithms. In: Recent Advances and Trends in Nonparametric Statistics (Eds. Akritas, M.G., Politis, D.N.), Elsevier (2003).

16
Bühlmann, P.: Boosting for high-dimensional linear models. Preprint (2004). Available from http://www.stat.math.ethz.ch/~buhlmann/bibliog.html

17
Bühlmann, P., Yu, B.: Discussion on ``Additive logistic regression: a statistical view of boosting'' (Auths. Friedman, J., Hastie, T., Tibshirani, R.). Annals of Statistics 28, 377-386 (2000).

18
Bühlmann, P., Yu, B.: Analyzing bagging. Annals of Statistics 30, 927-961 (2002).

19
Bühlmann, P., Yu, B.: Boosting with the $L_2$ loss: regression and classification. J. American Statistical Association 98, 324-339 (2003).

20
Buja, A., Stuetzle, W.: Observations on bagging. Preprint (2002). Available from http://ljsavage.wharton.upenn.edu/~buja/

21
Bylander, T.: Estimating generalization error on two-class datasets using out-of-bag estimates. Machine Learning 48, 287-297 (2002).

22
Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM J. Scientific Computing 20(1), 33-61 (1999).

23
Chen, S.X., Hall, P.: Effects of bagging and bias correction on estimators defined by estimating equations. Statistica Sinica 13, 97-109 (2003).

24
Dettling, M.: Bag-Boosting for tumor classification. In preparation (2004).

25
Dettling, M., Bühlmann, P.: Boosting for tumor classification with gene expression data. Bioinformatics 19(9), 1061-1069 (2003).

26
Borra, S., Di Ciaccio, A.: Improving nonparametric regression methods by bagging and boosting. Computational Statistics & Data Analysis 38, 407-420 (2002).

27
Dudoit, S., Fridlyand, J.: Bagging to improve the accuracy of a clustering procedure. Bioinformatics 19(9), 1090-1099 (2003).

28
Efron, B., Tibshirani, R.: The problem of regions. Annals of Statistics 26, 1687-1718 (1998).

29
Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. To appear in Annals of Statistics, 32(2) (2004).

30
Freund, Y.: Boosting a weak learning algorithm by majority. Information and Computation 121, 256-285 (1995).

31
Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: Machine Learning: Proc. Thirteenth International Conference, pp. 148-156. Morgan Kaufmann, San Francisco (1996).

32
Friedman, J.H.: Multivariate adaptive regression splines. Annals of Statistics 19, 1-141 (with discussion) (1991).

33
Friedman, J.H.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29, 1189-1232 (2001).

34
Friedman, J.H., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. Annals of Statistics 28, 337-407 (with discussion) (2000).

35
Hastie, T.J., Tibshirani, R.J.: Generalized Additive Models. Chapman & Hall, London (1990).

36
Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Data Mining, Inference and Prediction. Springer, New York (2001).

37
Hothorn, T., Lausen, B.: Bundling classifiers by bagging trees. Preprint (2002). Available from http://www.mathpreprints.com/math/Preprint/blausen/20021016/1/

38
Hurvich, C.M., Simonoff, J.S., Tsai, C.-L.: Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. J. Royal Statistical Society, Series B, 60, 271-293 (1998).

39
Jiang, W.: Process consistency for AdaBoost. To appear in Annals of Statistics, 32(1) (2004).

40
Lugosi, G., Vayatis, N.: On the Bayes-risk consistency of regularized boosting methods. To appear in Annals of Statistics, 32(1) (2004).

41
Mallat, S., Zhang, Z.: Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing 41, 3397-3415 (1993).

42
Mannor, S., Meir, R., Zhang, T.: The consistency of greedy algorithms for classification. In: Proceedings COLT 2002, Vol. 2375 of LNAI, pp. 319-333. Springer, Sydney (2002).

43
Mason, L., Baxter, J., Bartlett, P., Frean, M.: Functional gradient techniques for combining hypotheses. In: Advances in Large Margin Classifiers (Eds. Smola, A.J., Bartlett, P.J., Schölkopf, B., Schuurmans, D.). MIT Press, Cambridge, MA (2000).

44
Politis, D.N., Romano, J.P., Wolf, M.: Subsampling. Springer, New York (1999).

45
Ridgeway, G.: Looking for lumps: boosting and bagging for density estimation. Computational Statistics and Data Analysis 38(4), 379-392 (2002).

46
Rosset, S., Zhu, J., Hastie, T.: Margin maximizing loss functions. Accepted poster for NIPS (2003). Available from http://www-stat.stanford.edu/~hastie/pub.htm

47
Schapire, R.E.: The strength of weak learnability. Machine Learning 5, 197-227 (1990).

48
Schapire, R.E.: The boosting approach to machine learning: an overview. In: MSRI Workshop on Nonlinear Estimation and Classification (Eds. Denison, D.D., Hansen, M.H., Holmes, C.C., Mallick, B., Yu, B.). Springer, New York (2002).

49
Schölkopf, B., Smola, A.J.: Learning with Kernels. MIT Press, Cambridge (2002).

50
Tibshirani, R.: Regression shrinkage and selection via the lasso. J. Royal Statistical Society, Series B, 58, 267-288 (1996).

51
Tukey, J.W.: Exploratory Data Analysis. Addison-Wesley, Reading, MA (1977).

52
Tutz, G., Hechenbichler, K.: Aggregating classifiers with ordinal response structure. SFB 386 Discussion Paper No. 359 (2003). Available from http://www.stat.uni-muenchen.de/sfb386/

53
Vapnik, V.N.: Statistical Learning Theory. Wiley, New York (1998).

54
Wahba, G.: Spline Models for Observational Data. Society for Industrial and Applied Mathematics, Philadelphia (1990).

55
Zhang, T., Yu, B.: Boosting with early stopping: convergence and consistency. Technical Report 635, Dept. of Statistics, Univ. of Calif., Berkeley (2003). Available from http://www.stat.berkeley.edu/users/binyu/publications.html

56
Zhu, J., Rosset, S., Hastie, T., Tibshirani, R.: 1-norm support vector machines. Accepted spotlight poster for NIPS (2003). Available from http://www-stat.stanford.edu/~hastie/pub.htm