15. Support Vector Machines
Sebastian Mika, Christin Schäfer, Pavel Laskov, David Tax, Klaus-Robert Müller
Subsections
15.1 Introduction
15.2 Learning from Examples
15.2.1 General Setting of Statistical Learning
15.2.2 Desirable Properties for Induction Principles
15.2.2.1 Regularization
15.2.2.2 Consistency
15.2.3 Structural Risk Minimization
15.3 Linear SVM: Learning Theory in Practice
15.3.1 Linear Separation Planes
15.3.2 Canonical Hyperplanes and Margins
15.4 Non-linear SVM
15.4.1 The Kernel Trick
15.4.2 Feature Spaces
15.4.2.1 The Feature Map
15.4.2.2 Mercer Kernels
15.4.3 Properties of Kernels
15.5 Implementation of SVM
15.5.1 Basic Formulations
15.5.1.1 Separable Data
15.5.1.2 Non-separable Data
15.5.1.3 Sparsity
15.5.2 Decomposition
15.5.2.1 Basic Principles
15.5.2.2 Working Set Selection: Feasible Direction Algorithms
15.5.2.3 Sequential Minimal Optimization
15.5.3 Incremental Support Vector Optimization
15.6 Extensions of SVM
15.6.1 Regression
15.6.2 One-Class Classification
15.7 Applications
15.7.1 Optical Character Recognition (OCR)
15.7.2 Text Categorization and Text Mining
15.7.3 Active Learning in Drug Design
15.7.4 Other Applications
15.8 Summary and Outlook