Support Vector Machine

Related Reading

Stanford Machine Learning Lecture Note: Support Vector Machine

Stanford Machine Learning Lecture Note: Support Vector Machine (local copy)

Reading Notes

A general note. In SVM, the class is labeled using +1/-1. More precisely, \(y_i \in \{+1, -1\} \).

Margin

Separating hyperplane and sign

separating_hyperplane.png
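
For reference alongside the figure, the classifier in the lecture note predicts with the sign of the hyperplane output,

$$ h_{w,b}(x) = g(w^T x + b), \qquad g(z) = \begin{cases} +1 & \text{if } z \ge 0 \\ -1 & \text{otherwise} \end{cases} $$

and the two margin notions used below are the functional margin \( \hat{\gamma}_i = y_i (w^T x_i + b) \) and the geometric margin \( \gamma_i = y_i \left( \frac{w^T x_i + b}{||w||} \right) \); the margin with respect to the whole training set is the minimum over all examples.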

Optimization Problem

In the lecture note, three forms of the optimization problem are presented. Note that each formulation consists of two parts: an objective function and a set of constraints.

The first formulation is based on the geometric margin. Both the objective function and the constraints are defined using the geometric margin \(\gamma\).

optimization_geometric.png
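
In case the image does not render, this first formulation is the standard geometric-margin problem from the lecture note:

$$ \max_{\gamma, w, b} \;\; \gamma \quad \text{s.t.} \quad y_i (w^T x_i + b) \ge \gamma, \;\; i = 1, \dots, m, \qquad ||w|| = 1 $$

The constraint \( ||w|| = 1 \) makes the functional margin equal the geometric margin, but it also makes the feasible set non-convex, which motivates the next two reformulations.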

The second formulation is a mixed form. The objective is still the geometric margin (recall that \( \gamma = \frac{\hat{\gamma}} {||w||} \)), while the constraints are written in terms of the functional margin. This transformation is valid because the first formulation has the constraint \( ||w|| = 1 \), under which the functional and geometric margins coincide, so the geometric-margin constraints can be replaced with functional-margin constraints; dropping \( ||w|| = 1 \) is then compensated for by maximizing \( \frac{\hat{\gamma}}{||w||} \) instead of \( \hat{\gamma} \).

optimization_functional.png
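
In case the image does not render, the mixed formulation is (in the lecture note's notation):

$$ \max_{\hat{\gamma}, w, b} \;\; \frac{\hat{\gamma}}{||w||} \quad \text{s.t.} \quad y_i (w^T x_i + b) \ge \hat{\gamma}, \;\; i = 1, \dots, m $$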

The third formulation is the following. The idea is that we can rescale \(w\) and \(b\) so that \(\hat{\gamma}\) equals 1; the objective \( \frac{\hat{\gamma}}{||w||} = \frac{1}{||w||} \) is then maximized exactly when \( \frac{1}{2} ||w||^2 \) is minimized, which gives a convex objective.

optimization_final.png
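
Written out, the third formulation is:

$$ \min_{w, b} \;\; \frac{1}{2} ||w||^2 \quad \text{s.t.} \quad y_i (w^T x_i + b) \ge 1, \;\; i = 1, \dots, m $$

This is a convex quadratic program with linear constraints, so it can be solved with standard QP software.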

Finally, for the non-separable case, we have

non_separable_case.png
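
Written out, the non-separable (soft-margin) formulation adds a slack variable \( \xi_i \) per example and a penalty parameter \( C \):

$$ \min_{w, b, \xi} \;\; \frac{1}{2} ||w||^2 + C \sum_{i=1}^{m} \xi_i \quad \text{s.t.} \quad y_i (w^T x_i + b) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \;\; i = 1, \dots, m $$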

Hinge Loss

Hinge loss is defined as follows:

$$ \mathrm{Loss}_{\mathrm{hinge}} = \max(0,\, 1 - y(w^T x + b)) $$

This is directly related to the constraints in the non-separable case. At the optimum, each slack variable equals \( \xi_i = \max(0,\, 1 - y_i(w^T x_i + b)) \), i.e. exactly the hinge loss of example \(i\), so the soft-margin objective \( \frac{1}{2}||w||^2 + C \sum_i \xi_i \) is a reformulation of regularized hinge-loss minimization.
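
Below is a minimal numpy sketch of this correspondence (the toy data, weights, and function name are made up for illustration): it computes the per-example hinge loss, which plays the role of the optimal slack \( \xi_i \), and plugs it into the soft-margin objective.

```python
import numpy as np

def hinge_loss(w, b, X, y):
    """Per-example hinge loss max(0, 1 - y (w^T x + b))."""
    margins = y * (X @ w + b)          # functional margins y_i (w^T x_i + b)
    return np.maximum(0.0, 1.0 - margins)

# Toy data: two points per class; the last point is misclassified by (w, b).
X = np.array([[2.0, 1.0],
              [1.5, -0.5],
              [-1.0, -2.0],
              [0.2, 0.1]])
y = np.array([1, 1, -1, -1])
w = np.array([1.0, 1.0])
b = 0.0

losses = hinge_loss(w, b, X, y)
print(losses)  # 0 for points with margin >= 1, positive for margin violations

# Soft-margin objective: (1/2)||w||^2 + C * sum_i xi_i, with xi_i = hinge loss
C = 1.0
objective = 0.5 * np.dot(w, w) + C * losses.sum()
print(objective)
```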

----- END -----
