Introduction to Support Vector Machines {#tutorial_introduction_to_svm}
=======================================

@tableofcontents

@prev_tutorial{tutorial_barcode_detect_and_decode}
@next_tutorial{tutorial_non_linear_svms}

| | |
| -: | :- |
| Original author | Fernando Iglesias García |
| Compatibility | OpenCV >= 3.0 |

Goal
----

In this tutorial you will learn how to:

- Use the OpenCV functions @ref cv::ml::SVM::train to build a classifier based on SVMs and @ref
  cv::ml::SVM::predict to test its performance.
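
The whole workflow fits in a few lines. The sketch below is a minimal, self-contained C++ program
(not the tutorial sample itself) that trains a linear SVM on four hand-made points and classifies
one query point; the data values are made up for illustration.

@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
#include <iostream>

int main()
{
    // Four 2D training points and their class labels (illustrative values)
    float trainData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
    int   labels[4]       = { 1, -1, -1, -1 };
    cv::Mat samples(4, 2, CV_32F, trainData);
    cv::Mat responses(4, 1, CV_32SC1, labels);

    // Configure and train a linear C-SVC with default termination criteria
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::LINEAR);
    svm->train(samples, cv::ml::ROW_SAMPLE, responses);

    // Classify a new 2D point
    cv::Mat query = (cv::Mat_<float>(1, 2) << 400.f, 50.f);
    std::cout << "Predicted label: " << svm->predict(query) << std::endl;
    return 0;
}
@endcode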

What is a SVM?
--------------

A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating
hyperplane. In other words, given labeled training data (*supervised learning*), the algorithm
outputs an optimal hyperplane which categorizes new examples.

In what sense is the obtained hyperplane optimal? Let's consider the following simple problem:

For a linearly separable set of 2D-points which belong to one of two classes, find a separating
straight line.

![](images/separating-lines.png)

@note In this example we deal with lines and points in the Cartesian plane instead of hyperplanes
and vectors in a high dimensional space. This is a simplification of the problem. It is important to
understand that this is done only because our intuition is better built from examples that are easy
to imagine. However, the same concepts apply to tasks where the examples to classify lie in a space
whose dimension is higher than two.

In the above picture you can see that there exist multiple lines that offer a solution to the
problem. Is any of them better than the others? We can intuitively define a criterion to estimate
the worth of the lines: <em> A line is bad if it passes too close to the points because it will be
noise sensitive and it will not generalize correctly. </em> Therefore, our goal should be to find
the line passing as far as possible from all points.

Then, the operation of the SVM algorithm is based on finding the hyperplane that gives the largest
minimum distance to the training examples. Twice this distance is known as the **margin** in SVM
theory. Therefore, the optimal separating hyperplane *maximizes* the margin of the training data.

![](images/optimal-hyperplane.png)

How is the optimal hyperplane computed?
---------------------------------------

Let's introduce the notation used to define formally a hyperplane:

\f[f(x) = \beta_{0} + \beta^{T} x,\f]

where \f$\beta\f$ is known as the *weight vector* and \f$\beta_{0}\f$ as the *bias*.
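
For a concrete (hypothetical) example in the plane, the choice \f$\beta = (1, 1)^{T}\f$ and
\f$\beta_{0} = -1\f$ gives

\f[f(x) = -1 + x_{1} + x_{2},\f]

whose zero set \f$f(x) = 0\f$ is the straight line \f$x_{1} + x_{2} = 1\f$; points with
\f$f(x) > 0\f$ fall on one side of that line and points with \f$f(x) < 0\f$ on the other.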

@note A more in-depth description of this and of hyperplanes can be found in Section 4.5 (*Separating
Hyperplanes*) of the book *Elements of Statistical Learning* by T. Hastie, R. Tibshirani and J. H.
Friedman (@cite HTF01).

The optimal hyperplane can be represented in an infinite number of different ways by
scaling of \f$\beta\f$ and \f$\beta_{0}\f$. As a matter of convention, among all the possible
representations of the hyperplane, the one chosen is

\f[|\beta_{0} + \beta^{T} x| = 1,\f]

where \f$x\f$ symbolizes the training examples closest to the hyperplane. In general, the training
examples that are closest to the hyperplane are called **support vectors**. This representation is
known as the **canonical hyperplane**.

Now, we use the result of geometry that gives the distance between a point \f$x\f$ and a hyperplane
\f$(\beta, \beta_{0})\f$:

\f[\mathrm{distance} = \frac{|\beta_{0} + \beta^{T} x|}{||\beta||}.\f]
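
As a quick sanity check (with made-up numbers), for \f$\beta = (3, 4)^{T}\f$, \f$\beta_{0} = 1\f$
and the point \f$x = (1, 1)^{T}\f$ this gives

\f[\mathrm{distance} = \frac{|1 + 3 \cdot 1 + 4 \cdot 1|}{\sqrt{3^{2} + 4^{2}}} = \frac{8}{5} = 1.6.\f]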

In particular, for the canonical hyperplane, the numerator is equal to one and the distance to the
support vectors is

\f[\mathrm{distance}_{\text{ support vectors}} = \frac{|\beta_{0} + \beta^{T} x|}{||\beta||} = \frac{1}{||\beta||}.\f]

Recall that the margin introduced in the previous section, here denoted as \f$M\f$, is twice the
distance to the closest examples:

\f[M = \frac{2}{||\beta||}\f]

Finally, the problem of maximizing \f$M\f$ is equivalent to the problem of minimizing a function
\f$L(\beta)\f$ subject to some constraints. The constraints model the requirement for the hyperplane to
classify correctly all the training examples \f$x_{i}\f$. Formally,

\f[\min_{\beta, \beta_{0}} L(\beta) = \frac{1}{2}||\beta||^{2} \text{ subject to } y_{i}(\beta^{T} x_{i} + \beta_{0}) \geq 1 \text{ } \forall i,\f]

where \f$y_{i}\f$ represents each of the labels of the training examples.

This is a problem of Lagrangian optimization that can be solved using Lagrange multipliers to obtain
the weight vector \f$\beta\f$ and the bias \f$\beta_{0}\f$ of the optimal hyperplane.
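
Although the derivation is beyond the scope of this tutorial, the standard way of writing this down
is the Lagrangian

\f[\mathcal{L}(\beta, \beta_{0}, \alpha) = \frac{1}{2}||\beta||^{2} - \sum_{i} \alpha_{i} \left[ y_{i}(\beta^{T} x_{i} + \beta_{0}) - 1 \right], \quad \alpha_{i} \geq 0,\f]

whose stationarity conditions give \f$\beta = \sum_{i} \alpha_{i} y_{i} x_{i}\f$; the training
examples with \f$\alpha_{i} > 0\f$ are the support vectors.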

Source Code
-----------

@add_toggle_cpp
- **Downloadable code**: Click
  [here](https://github.com/opencv/opencv/tree/4.x/samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp)

- **Code at glance:**
  @include samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp
@end_toggle

@add_toggle_java
- **Downloadable code**: Click
  [here](https://github.com/opencv/opencv/tree/4.x/samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java)

- **Code at glance:**
  @include samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java
@end_toggle

@add_toggle_python
- **Downloadable code**: Click
  [here](https://github.com/opencv/opencv/tree/4.x/samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py)

- **Code at glance:**
  @include samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py
@end_toggle

Explanation
-----------

- **Set up the training data**

The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
two different classes; one of the classes consists of one point and the other of three points.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp setup1
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java setup1
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py setup1
@end_toggle

The function @ref cv::ml::SVM::train that will be used afterwards requires the training data to be
stored as @ref cv::Mat objects of floats. Therefore, we create these objects from the arrays
defined above:

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp setup2
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java setup2
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py setup1
@end_toggle
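
A condensed C++ sketch of this data setup might look like the following (illustrative values; the
real arrays live in the snippets above). Note that for classification the labels go into an integer
matrix while the samples are floats:

@code{.cpp}
// One row per training example, one column per feature
float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
int   labels[4]          = { 1, -1, -1, -1 };

cv::Mat trainingDataMat(4, 2, CV_32F, trainingData);   // samples as floats
cv::Mat labelsMat(4, 1, CV_32SC1, labels);             // class labels as 32-bit integers
@endcode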

- **Set up SVM's parameters**

In this tutorial we have introduced the theory of SVMs in the simplest case, when the
training examples are spread into two classes that are linearly separable. However, SVMs can be
used in a wide variety of problems (e.g. problems with non-linearly separable data, an SVM using
a kernel function to raise the dimensionality of the examples, etc). As a consequence of this,
we have to define some parameters before training the SVM. These parameters are stored in an
object of the class @ref cv::ml::SVM.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp init
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java init
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py init
@end_toggle

Here:

- *Type of SVM*. We choose here the type @ref cv::ml::SVM::C_SVC "C_SVC" that can be used for
  n-class classification (n \f$\geq\f$ 2). The important feature of this type is that it deals
  with imperfect separation of classes (i.e. when the training data is non-linearly separable).
  This feature is not important here since the data is linearly separable; we chose this SVM
  type only because it is the most commonly used.

- *Type of SVM kernel*. We have not talked about kernel functions since they are not
  interesting for the training data we are dealing with. Nevertheless, let's explain briefly now
  the main idea behind a kernel function. It is a mapping done to the training data to improve
  its resemblance to a linearly separable set of data. This mapping consists of increasing the
  dimensionality of the data and is done efficiently using a kernel function. We choose here the
  type @ref cv::ml::SVM::LINEAR "LINEAR", which means that no mapping is done. This parameter is
  defined using cv::ml::SVM::setKernel.

- *Termination criteria of the algorithm*. The SVM training procedure is implemented solving a
  constrained quadratic optimization problem in an **iterative** fashion. Here we specify a
  maximum number of iterations and a tolerance error, allowing the algorithm to finish in
  fewer steps even if the optimal hyperplane has not been computed yet. This parameter is
  defined in a structure @ref cv::TermCriteria (see the sketch after this list).
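
A rough C++ sketch of this configuration step (a fragment; the settings follow what is described
above, but the exact numbers are only an example):

@code{.cpp}
cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
svm->setType(cv::ml::SVM::C_SVC);       // n-class classification with penalty parameter C
svm->setKernel(cv::ml::SVM::LINEAR);    // no mapping; other kernels (e.g. SVM::RBF) also exist
// Terminate after at most 100 iterations (1e-6 is the stored accuracy tolerance)
svm->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 100, 1e-6));
@endcode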

- **Train the SVM**

We call the method @ref cv::ml::SVM::train to build the SVM model.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp train
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java train
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py train
@end_toggle
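
In C++ the call takes the samples, a layout flag and the labels. An equivalent formulation, shown
here only as an illustration of the API and assuming the matrix names from the setup sketch, wraps
everything in a @ref cv::ml::TrainData object:

@code{.cpp}
// Direct form: one row of trainingDataMat per training example
svm->train(trainingDataMat, cv::ml::ROW_SAMPLE, labelsMat);

// Equivalent form via a TrainData container
cv::Ptr<cv::ml::TrainData> td = cv::ml::TrainData::create(trainingDataMat, cv::ml::ROW_SAMPLE, labelsMat);
svm->train(td);
@endcode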

- **Regions classified by the SVM**

The method @ref cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In
this example we have used this method in order to color the space depending on the prediction done
by the SVM. In other words, an image is traversed interpreting its pixels as points of the
Cartesian plane. Each of the points is colored depending on the class predicted by the SVM: in
green if it is the class with label 1 and in blue if it is the class with label -1.
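
The core of that loop is a single prediction per pixel, roughly as below (a fragment; `i` and `j`
stand for the pixel's row and column, and `svm` for the trained model):

@code{.cpp}
cv::Mat sampleMat = (cv::Mat_<float>(1, 2) << j, i);   // treat the pixel (j, i) as a 2D point
float response = svm->predict(sampleMat);              // returns the predicted class label
@endcode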

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp show
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java show
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py show
@end_toggle

- **Support vectors**

We use here a couple of methods to obtain information about the support vectors.
The method @ref cv::ml::SVM::getSupportVectors obtains all of the support
vectors. We have used this method here to find the training examples that are
support vectors and highlight them.
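
A minimal sketch of that lookup (a fragment; `image` and `thickness` are assumed to exist in the
surrounding program):

@code{.cpp}
cv::Mat sv = svm->getSupportVectors();          // one support vector per row, stored as floats
for (int i = 0; i < sv.rows; i++)
{
    const float* v = sv.ptr<float>(i);
    // draw a gray ring around each training example that is a support vector
    cv::circle(image, cv::Point((int)v[0], (int)v[1]), 6, cv::Scalar(128, 128, 128), thickness);
}
@endcode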

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp show_vectors
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java show_vectors
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py show_vectors
@end_toggle

Results
-------

- The code opens an image and shows the training examples of both classes. The points of one class
  are represented with white circles and black ones are used for the other class.
- The SVM is trained and used to classify all the pixels of the image. This results in a division
  of the image into a blue region and a green region. The boundary between both regions is the
  optimal separating hyperplane.
- Finally, the support vectors are shown using gray rings around the training examples.

![](images/svm_intro_result.png)