### Related Reading

- Stanford Machine Learning Lecture Note: Support Vector Machine
- Stanford Machine Learning Lecture Note: Support Vector Machine (local copy)

### Reading Notes

A general note: in SVM, the classes are labeled with +1/-1. More precisely, \(y^{(i)} \in \{+1, -1\} \).

#### Margin

- Functional margin: \(\hat{\gamma}^{(i)} = y^{(i)}(w^T x^{(i)} + b) \)
- Geometric margin: \(\gamma^{(i)} = y^{(i)}(\frac{w^T x^{(i)}}{||w||} + \frac{b}{||w||}) \)
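The two margins can be sketched directly from the definitions above. This is a minimal illustration with made-up numbers; the function names are mine, not from the lecture note:

```python
import numpy as np

def functional_margin(w, b, x, y):
    """Functional margin: gamma_hat = y * (w^T x + b)."""
    return y * (np.dot(w, x) + b)

def geometric_margin(w, b, x, y):
    """Geometric margin: the functional margin scaled by 1/||w||."""
    return functional_margin(w, b, x, y) / np.linalg.norm(w)

w = np.array([3.0, 4.0])   # ||w|| = 5
b = -1.0
x = np.array([2.0, 1.0])
y = 1
print(functional_margin(w, b, x, y))  # 3*2 + 4*1 - 1 = 9.0
print(geometric_margin(w, b, x, y))   # 9 / 5 = 1.8
```

Note that scaling \(w\) and \(b\) by a constant changes the functional margin but not the geometric margin, which is why the latter is the meaningful notion of distance.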

#### Separating hyperplane and sign

The separating hyperplane is the set \(\{x : w^T x + b = 0\}\); a point is classified by the sign of \(w^T x + b\), i.e. \(h(x) = \operatorname{sign}(w^T x + b)\).
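A minimal sketch of sign-based classification, assuming made-up weights and data:

```python
import numpy as np

def predict(w, b, X):
    """Classify each row of X by the sign of w^T x + b, returning +1 or -1."""
    return np.where(X @ w + b >= 0, 1, -1)

w = np.array([1.0, -1.0])
b = 0.0
X = np.array([[2.0, 1.0],   # w^T x + b =  1 -> +1
              [1.0, 3.0]])  # w^T x + b = -2 -> -1
print(predict(w, b, X))  # [ 1 -1]
```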

#### Optimization Problem

In the lecture note, three forms of the optimization problem are presented. Note that each formulation consists of two parts:

- objective function
- constraints

The first formulation is based on the geometric margin: both the objective function and the constraints are defined using the geometric margin \(\gamma\).

\[ \max_{\gamma, w, b} \; \gamma \quad \text{s.t.} \quad y^{(i)}(w^T x^{(i)} + b) \geq \gamma, \; i = 1, \dots, m, \quad ||w|| = 1 \]

The second formulation is a mixed form. The objective is still the geometric margin (recall that \( \gamma = \frac{\hat{\gamma}}{||w||} \)), while the constraints are written in terms of the functional margin:

\[ \max_{\hat{\gamma}, w, b} \; \frac{\hat{\gamma}}{||w||} \quad \text{s.t.} \quad y^{(i)}(w^T x^{(i)} + b) \geq \hat{\gamma}, \; i = 1, \dots, m \]

This transformation is valid because the first formulation carries the constraint \( ||w|| = 1 \), under which the geometric and functional margins coincide, so the geometric-margin constraints can be replaced with functional-margin constraints.

The third formulation is the following. The idea is that \(w\) and \(b\) can be scaled so that \(\hat{\gamma} = 1\); maximizing \(\hat{\gamma}/||w|| = 1/||w||\) is then equivalent to minimizing \(||w||^2\):

\[ \min_{w, b} \; \frac{1}{2}||w||^2 \quad \text{s.t.} \quad y^{(i)}(w^T x^{(i)} + b) \geq 1, \; i = 1, \dots, m \]
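The scaling step can be checked numerically: dividing \(w\) and \(b\) by the smallest functional margin leaves the hyperplane unchanged but makes the minimum functional margin exactly 1. A small sketch with made-up separable data:

```python
import numpy as np

# Toy separable data with labels in {+1, -1} (made up for illustration).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0]])
y = np.array([1, 1, -1])

w = np.array([1.0, 1.0])
b = -1.0

margins = y * (X @ w + b)   # functional margins, all positive here
gamma_hat = margins.min()   # smallest functional margin
print(gamma_hat)            # 3.0

# Rescale: same hyperplane, but now min functional margin = 1.
w_scaled, b_scaled = w / gamma_hat, b / gamma_hat
print((y * (X @ w_scaled + b_scaled)).min())  # 1.0
```

The geometric margin is unaffected by this rescaling, which is what makes the normalization \(\hat{\gamma} = 1\) legitimate.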

Finally, for the non-separable case, we introduce slack variables \(\xi_i\) and a penalty parameter \(C\):

\[ \min_{w, b, \xi} \; \frac{1}{2}||w||^2 + C\sum_{i=1}^{m} \xi_i \quad \text{s.t.} \quad y^{(i)}(w^T x^{(i)} + b) \geq 1 - \xi_i, \; \xi_i \geq 0, \; i = 1, \dots, m \]
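Eliminating the slack variables turns the soft-margin problem into unconstrained minimization of \(\frac{1}{2}||w||^2 + C\sum_i \max(0, 1 - y^{(i)}(w^T x^{(i)} + b))\), which can be attacked with subgradient descent. A minimal sketch (made-up data; the function name and hyperparameters are mine, not from the lecture note):

```python
import numpy as np

def train_soft_margin(X, y, C=1.0, lr=0.01, epochs=200):
    """Subgradient descent on (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w^T x_i + b))."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1  # points violating the margin contribute to the subgradient
        grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
        grad_b = -C * y[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable data (made up for illustration).
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_soft_margin(X, y)
print(np.sign(X @ w + b))  # should match y
```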

#### Hinge Loss

Hinge loss is defined as

\[ \ell(y, f(x)) = \max(0, 1 - y \, f(x)), \quad \text{where } f(x) = w^T x + b \]

This is related to the constraints in the non-separable case: at the optimum each slack variable takes its smallest feasible value, \(\xi_i = \max(0, 1 - y^{(i)}(w^T x^{(i)} + b))\), so the loss is a reformulation of the constraints.
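The slack/loss equivalence can be sketched directly: the smallest \(\xi_i\) satisfying \(y^{(i)}(w^T x^{(i)} + b) \geq 1 - \xi_i\) and \(\xi_i \geq 0\) is exactly the hinge loss. A check with made-up numbers:

```python
import numpy as np

def hinge(w, b, X, y):
    """Per-example hinge loss: max(0, 1 - y_i (w^T x_i + b))."""
    return np.maximum(0.0, 1.0 - y * (X @ w + b))

w = np.array([1.0, 0.0])
b = 0.0
X = np.array([[2.0, 0.0],    # margin 2.0  -> loss 0
              [0.5, 0.0],    # margin 0.5  -> loss 0.5
              [-0.5, 0.0]])  # y=-1, margin 0.5 -> loss 0.5
y = np.array([1, 1, -1])
print(hinge(w, b, X, y))  # [0.  0.5 0.5]
```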


©2019 - 2021 all rights reserved
