3. Linear separability
• Linear separability
– In general, two groups are linearly separable in n-dimensional space if they can be separated by an (n − 1)-dimensional hyperplane.
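The definition above can be checked directly: a minimal sketch (NumPy only, hypothetical 2-D data) that tests whether a candidate 1-dimensional hyperplane, i.e. a line w·x + b = 0, puts every point of one group on the positive side and every point of the other on the negative side.

```python
# Hypothetical illustration: two point groups in 2-D and a candidate
# separating line w.x + b = 0. They are linearly separable if some such
# line puts all of group_a strictly on one side and all of group_b on the other.
import numpy as np

group_a = np.array([[1.0, 2.0], [2.0, 3.0], [1.5, 2.5]])   # above the line
group_b = np.array([[3.0, 0.5], [4.0, 1.0], [3.5, 0.0]])   # below the line

w, b = np.array([-1.0, 1.0]), 0.0   # candidate hyperplane: -x1 + x2 = 0

def side(points):
    """Sign of w.x + b for each point (+1 / -1 / 0)."""
    return np.sign(points @ w + b)

separable = bool(np.all(side(group_a) > 0) and np.all(side(group_b) < 0))
print(separable)  # -> True for this candidate line
```

Note that failing for one candidate line proves nothing; separability only requires that *some* hyperplane works.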
9. Soft Margin
• Choose a hyperplane that splits the examples
as cleanly as possible
• Still maximize the distance to the nearest
cleanly split examples
• Introduce an error cost C: an example that falls a
distance d on the wrong side of the margin adds a
penalty of d · C to the objective
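A minimal sketch of the soft-margin idea (NumPy only, hypothetical synthetic data): minimize ½‖w‖² + C · Σᵢ ξᵢ, where ξᵢ is the hinge loss (the margin violation) of example i, by subgradient descent. The learning rate, iteration count, and data are illustrative choices, not from the slides.

```python
# Soft-margin linear SVM via hinge-loss subgradient descent (sketch).
# Objective: 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b)).
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

C, lr = 1.0, 0.01          # error cost and step size (illustrative values)
w, b = np.zeros(2), 0.0
for _ in range(500):
    margins = y * (X @ w + b)
    viol = margins < 1      # examples inside the margin or misclassified
    grad_w = w - C * (y[viol][:, None] * X[viol]).sum(axis=0)
    grad_b = -C * y[viol].sum()
    w, b = w - lr * grad_w, b - lr * grad_b

accuracy = np.mean(np.sign(X @ w + b) == y)
```

Raising C penalizes margin violations more heavily (a narrower, stricter margin); lowering it tolerates more violations in exchange for a wider margin.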
11. Kernel Trick
• Building maximal-margin hyperplanes in a
high-dimensional feature space depends only on
inner products, which are costly to compute there
• Use a kernel function that is evaluated in the
original low-dimensional space but behaves like
an inner product in the high-dimensional space
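The classic illustration of this trick, sketched below with NumPy: for 2-D inputs, the polynomial kernel k(x, z) = (x·z)² computed cheaply in 2 dimensions equals the ordinary inner product of the explicit 3-D feature map φ(x) = (x₁², √2·x₁x₂, x₂²), so the high-dimensional vectors never have to be built.

```python
# Kernel trick sketch: the polynomial kernel (x.z)^2 in 2-D equals the
# inner product phi(x).phi(z) in the explicit 3-D feature space.
import numpy as np

def phi(v):
    """Explicit feature map into 3-D space."""
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

def kernel(x, z):
    """Same quantity computed cheaply in the original 2-D space."""
    return (x @ z) ** 2

x, z = np.array([1.0, 2.0]), np.array([3.0, 0.5])
lhs = kernel(x, z)       # low-dimensional computation: (3 + 1)^2 = 16
rhs = phi(x) @ phi(z)    # high-dimensional inner product: 9 + 6 + 1 = 16
print(np.isclose(lhs, rhs))  # -> True
```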
19. Experimental results
• The results of SVMs with various C where δ2 is fixed
at 25
• Too small C
• underfitting*
• Too large C
• overfitting*
* F.E.H. Tay, L. Cao, Application of support vector machines in financial time series forecasting, Omega 29 (2001) 309–317
20. Experimental results
• The results of SVMs with various δ2 where C is fixed
at 78
• Small value of δ2
• overfitting*
• Large value of δ2
• underfitting*
* F.E.H. Tay, L. Cao, Application of support vector machines in financial time series forecasting, Omega 29 (2001) 309–317
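The two δ² regimes above can be seen directly in the kernel matrix. A sketch (NumPy, hypothetical points, assuming the common RBF form k(x, z) = exp(−‖x − z‖² / δ²)): a tiny δ² makes the Gram matrix nearly the identity, so every point is similar only to itself and the model can memorize the training set (overfitting); a huge δ² makes every entry nearly 1, so all points look alike and the model cannot discriminate (underfitting).

```python
# RBF kernel width delta^2 and the shape of the Gram matrix (sketch).
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 1.0]])

def rbf_gram(X, delta_sq):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / delta_sq)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / delta_sq)

K_small = rbf_gram(X, 1e-4)   # tiny width: off-diagonals vanish
K_large = rbf_gram(X, 1e4)    # huge width: everything close to 1

small_like_identity = np.allclose(K_small, np.eye(len(X)), atol=1e-6)
large_like_ones = np.allclose(K_large, np.ones((len(X), len(X))), atol=1e-2)
print(small_like_identity, large_like_ones)  # -> True True
```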
21. Experimental results and conclusion
• SVM outperforms BPN and CBR
• SVM minimizes structural risk
• SVM provides a promising alternative for
financial time-series forecasting
• Issues
– parameter tuning