Add Java and Python code for ML tutorials.

This commit is contained in:
catree 2018-07-13 15:11:00 +02:00
parent 4dc7e617a4
commit 41b95cae38
15 changed files with 1141 additions and 180 deletions

View File

@@ -91,37 +91,112 @@ __Find the eigenvectors and eigenvalues of the covariance matrix__
Source Code
-----------
@add_toggle_cpp
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/ml/introduction_to_pca/introduction_to_pca.cpp)
- **Code at a glance:**
@include samples/cpp/tutorial_code/ml/introduction_to_pca/introduction_to_pca.cpp
@end_toggle
@add_toggle_java
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/ml/introduction_to_pca/IntroductionToPCADemo.java)
- **Code at a glance:**
@include samples/java/tutorial_code/ml/introduction_to_pca/IntroductionToPCADemo.java
@end_toggle
@add_toggle_python
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/ml/introduction_to_pca/introduction_to_pca.py)
- **Code at a glance:**
@include samples/python/tutorial_code/ml/introduction_to_pca/introduction_to_pca.py
@end_toggle
@note Another example using PCA for dimensionality reduction while retaining a given amount of variance can be found at [opencv_source_code/samples/cpp/pca.cpp](https://github.com/opencv/opencv/tree/3.4/samples/cpp/pca.cpp)
Explanation
-----------
- __Read image and convert it to binary__
Here we apply the necessary pre-processing procedures in order to be able to detect the objects of interest.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_pca/introduction_to_pca.cpp pre-process
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_pca/IntroductionToPCADemo.java pre-process
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_pca/introduction_to_pca.py pre-process
@end_toggle
- __Extract objects of interest__
Then find and filter contours by size and obtain the orientation of the remaining ones.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_pca/introduction_to_pca.cpp contours
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_pca/IntroductionToPCADemo.java contours
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_pca/introduction_to_pca.py contours
@end_toggle
- __Extract orientation__
Orientation is extracted by calling the getOrientation() function, which performs the entire PCA procedure.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_pca/introduction_to_pca.cpp pca
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_pca/IntroductionToPCADemo.java pca
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_pca/introduction_to_pca.py pca
@end_toggle
First the data need to be arranged in a matrix of size n x 2, where n is the number of data points we have. Then we can perform the PCA analysis. The calculated mean (i.e. center of mass) is stored in the _cntr_ variable, and the eigenvectors and eigenvalues are stored in the corresponding std::vectors. A standalone Python sketch combining these steps appears after this list.
- __Visualize result__
The final result is visualized through the drawAxis() function, where the principal components are drawn as lines, and each eigenvector is multiplied by its eigenvalue and translated to the mean position.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_pca/introduction_to_pca.cpp visualization
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_pca/IntroductionToPCADemo.java visualization
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_pca/introduction_to_pca.py visualization
@end_toggle
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_pca/introduction_to_pca.cpp visualization1
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_pca/IntroductionToPCADemo.java visualization1
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_pca/introduction_to_pca.py visualization1
@end_toggle
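The toggles above only reference the sample files. As a quick, self-contained illustration of the same pipeline, here is a minimal Python sketch (assuming only NumPy and the `cv2` bindings; the synthetic blob, the seed and the 0.02 scale factor are illustrative choices, not part of the tutorial code) that runs PCA on a point cloud, recovers the orientation angle the way getOrientation() does, and builds an axis endpoint the way drawAxis() expects.

```python
import numpy as np
import cv2 as cv

# Synthetic elongated blob rotated by 30 degrees; PCA should recover that angle.
np.random.seed(0)
pts = np.random.normal(size=(500, 2)) * [50.0, 5.0]     # long in x, thin in y
theta = np.deg2rad(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
pts = pts @ rot.T + [256.0, 256.0]                      # rotate and recenter

# PCACompute2 expects one double-precision data point per row (the n x 2 matrix).
mean = np.empty((0))
mean, eigenvectors, eigenvalues = cv.PCACompute2(pts, mean)

cntr = (mean[0, 0], mean[0, 1])                         # center of mass
angle = np.arctan2(eigenvectors[0, 1], eigenvectors[0, 0])  # orientation in radians

# Endpoint of the first principal axis: the eigenvector scaled by its eigenvalue
# and translated to the mean position, as visualized by drawAxis().
p1 = (cntr[0] + 0.02 * eigenvectors[0, 0] * eigenvalues[0, 0],
      cntr[1] + 0.02 * eigenvectors[0, 1] * eigenvalues[0, 0])

print('center:', cntr, 'angle (deg):', np.degrees(angle) % 180.0)  # ~30
```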
Results
-------

View File

@@ -96,25 +96,67 @@ Source Code
@note The following code has been implemented with OpenCV 3.0 classes and functions. An equivalent version of the code using OpenCV 2.4 can be found on [this page](http://docs.opencv.org/2.4/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.html#introductiontosvms).
@add_toggle_cpp
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp)
- **Code at a glance:**
@include samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp
@end_toggle
@add_toggle_java
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java)
- **Code at a glance:**
@include samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java
@end_toggle
@add_toggle_python
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py)
- **Code at a glance:**
@include samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py
@end_toggle
Explanation
-----------
- **Set up the training data**
The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
two different classes; one of the classes consists of one point and the other of three points.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp setup1
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java setup1
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py setup1
@end_toggle
The function @ref cv::ml::SVM::train that will be used afterwards requires the training data to be
stored as @ref cv::Mat objects of floats. Therefore, we create these objects from the arrays
defined above:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp setup2
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java setup2
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py setup1
@end_toggle
- **Set up SVM's parameters**
In this tutorial we have introduced the theory of SVMs in the simplest case, when the
training examples belong to two classes that are linearly separable. However, SVMs can be
@@ -123,35 +165,55 @@ Explanation
we have to define some parameters before training the SVM. These parameters are stored in an
object of the class @ref cv::ml::SVM.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp init
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java init
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py init
@end_toggle
Here:
- *Type of SVM*. We choose here the type @ref cv::ml::SVM::C_SVC "C_SVC" that can be used for
n-class classification (n \f$\geq\f$ 2). The important feature of this type is that it deals
with imperfect separation of classes (i.e. when the training data is non-linearly separable).
This feature is not important here since the data is linearly separable; we chose this SVM
type only because it is the most commonly used.
- *Type of SVM kernel*. We have not talked about kernel functions, since they are not
relevant for the training data we are dealing with. Nevertheless, let's briefly explain
the main idea behind a kernel function. It is a mapping applied to the training data to improve
its resemblance to a linearly separable set of data. This mapping consists of increasing the
dimensionality of the data and is done efficiently using a kernel function. We choose here the
type @ref cv::ml::SVM::LINEAR "LINEAR", which means that no mapping is done. This parameter is
defined using cv::ml::SVM::setKernel.
- *Termination criteria of the algorithm*. The SVM training procedure is implemented by solving a
constrained quadratic optimization problem in an **iterative** fashion. Here we specify a
maximum number of iterations and a tolerance error, so we allow the algorithm to finish in
fewer steps even if the optimal hyperplane has not been computed yet. This
parameter is defined in a structure @ref cv::TermCriteria .
- **Train the SVM**
We call the method @ref cv::ml::SVM::train to build the SVM model.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp train
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java train
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py train
@end_toggle
- **Regions classified by the SVM**
The method @ref cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In
this example we have used this method in order to color the space depending on the prediction done
@@ -159,16 +221,36 @@ Explanation
Cartesian plane. Each of the points is colored depending on the class predicted by the SVM; in
green if it is the class with label 1 and in blue if it is the class with label -1.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp show
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java show
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py show
@end_toggle
- **Support vectors**
We use here a couple of methods to obtain information about the support vectors.
The method @ref cv::ml::SVM::getSupportVectors obtains all of the support
vectors. We have used this method here to find the training examples that are
support vectors and highlight them. A standalone sketch combining training, prediction
and support-vector retrieval appears after this list.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/introduction_to_svm/introduction_to_svm.cpp show_vectors
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/introduction_to_svm/IntroductionToSVMDemo.java show_vectors
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/introduction_to_svm/introduction_to_svm.py show_vectors
@end_toggle
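The toggles above pull the relevant lines from the sample files; the following self-contained Python sketch (the probe points are illustrative, everything else mirrors the tutorial) strings the four steps together: float32 training data, parameter setup, training, prediction, and support-vector retrieval.

```python
import numpy as np
import cv2 as cv

# The four labeled 2D training points from the tutorial, one sample per row.
# train() requires float32 samples and int32 labels.
labels = np.array([1, -1, -1, -1], dtype=np.int32)
trainingData = np.array([[501, 10], [255, 10], [501, 255], [10, 501]],
                        dtype=np.float32)

svm = cv.ml.SVM_create()
svm.setType(cv.ml.SVM_C_SVC)        # C_SVC: n-class classification
svm.setKernel(cv.ml.SVM_LINEAR)     # LINEAR: no mapping of the input data
svm.setTermCriteria((cv.TERM_CRITERIA_MAX_ITER, 100, 1e-6))
svm.train(trainingData, cv.ml.ROW_SAMPLE, labels)

# predict() returns a (retval, results) pair; results holds the labels.
for probe in ((400, 50), (100, 400)):
    sample = np.array([probe], dtype=np.float32)
    print(probe, '->', int(svm.predict(sample)[1][0, 0]))

# The training examples that ended up as support vectors.
print('support vectors:\n', svm.getUncompressedSupportVectors())
```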
Results
-------

View File

@@ -92,81 +92,175 @@ You may also find the source code in `samples/cpp/tutorial_code/ml/non_linear_sv
@note The following code has been implemented with OpenCV 3.0 classes and functions. An equivalent version of the code
using OpenCV 2.4 can be found on [this page](http://docs.opencv.org/2.4/doc/tutorials/ml/non_linear_svms/non_linear_svms.html#nonlinearsvms).
@add_toggle_cpp
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp)
- **Code at a glance:**
@include samples/cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp
@end_toggle
@add_toggle_java
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/ml/non_linear_svms/NonLinearSVMsDemo.java)
- **Code at a glance:**
@include samples/java/tutorial_code/ml/non_linear_svms/NonLinearSVMsDemo.java
@end_toggle
@add_toggle_python
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/ml/non_linear_svms/non_linear_svms.py)
- **Code at a glance:**
@include samples/python/tutorial_code/ml/non_linear_svms/non_linear_svms.py
@end_toggle
Explanation
-----------
- __Set up the training data__
The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
two different classes. To make the exercise more appealing, the training data is generated
randomly using uniform probability density functions (PDFs).
We have divided the generation of the training data into two main parts.
In the first part we generate data for both classes that is linearly separable.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp setup1
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/non_linear_svms/NonLinearSVMsDemo.java setup1
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/non_linear_svms/non_linear_svms.py setup1
@end_toggle
In the second part we create data for both classes that is non-linearly separable, data that
overlaps.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp setup2
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/non_linear_svms/NonLinearSVMsDemo.java setup2
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/non_linear_svms/non_linear_svms.py setup2
@end_toggle
- __Set up SVM's parameters__
@note In the previous tutorial @ref tutorial_introduction_to_svm there is an explanation of the
attributes of the class @ref cv::ml::SVM that we configure here before training the SVM.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp init
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/non_linear_svms/NonLinearSVMsDemo.java init
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/non_linear_svms/non_linear_svms.py init
@end_toggle
There are just two differences between the configuration we do here and the one that was done in
the previous tutorial (@ref tutorial_introduction_to_svm) that we use as reference.
- _C_. We chose here a small value of this parameter in order not to punish the
misclassification errors too much in the optimization. The idea stems from the wish to
obtain a solution close to the one intuitively expected. However, we recommend getting a
better insight into the problem by adjusting this parameter; the sketch after this list
varies _C_ and counts the resulting training errors.
@note In this case there are just very few points in the overlapping region between classes.
By giving a smaller value to __FRAC_LINEAR_SEP__ the density of points can be increased and the
impact of the parameter _C_ explored more deeply.
The method @ref cv::circle is used to show the samples that compose the training data. The samples
of the class labeled with 1 are shown in light green and in light blue the samples of the class
labeled with 2.
- _Termination Criteria of the algorithm_. The maximum number of iterations has to be
increased considerably in order to correctly solve a problem with non-linearly separable
training data. In particular, we have increased this value by five orders of magnitude.
- __Train the SVM__
We call the method @ref cv::ml::SVM::train to build the SVM model. Watch out: the training
process may take quite a long time. Have patience when you run the program.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp train
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/non_linear_svms/NonLinearSVMsDemo.java train
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/non_linear_svms/non_linear_svms.py train
@end_toggle
- __Show the Decision Regions__
The method @ref cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In
this example we have used this method in order to color the space depending on the prediction done
by the SVM. In other words, an image is traversed interpreting its pixels as points of the
Cartesian plane. Each of the points is colored depending on the class predicted by the SVM; in
dark green if it is the class with label 1 and in dark blue if it is the class with label 2.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp show
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/non_linear_svms/NonLinearSVMsDemo.java show
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/non_linear_svms/non_linear_svms.py show
@end_toggle
- __Show the training data__
The method @ref cv::circle is used to show the samples that compose the training data. The samples
of the class labeled with 1 are shown in light green, and the samples of the class labeled with 2
in light blue.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp show_data
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/non_linear_svms/NonLinearSVMsDemo.java show_data
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/non_linear_svms/non_linear_svms.py show_data
@end_toggle
- __Support vectors__
We use here a couple of methods to obtain information about the support vectors. The method
@ref cv::ml::SVM::getSupportVectors obtains all support vectors. We have used this method here
to find the training examples that are support vectors and highlight them.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp show_vectors
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ml/non_linear_svms/NonLinearSVMsDemo.java show_vectors
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ml/non_linear_svms/non_linear_svms.py show_vectors
@end_toggle
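Complementing the samples above, this self-contained Python sketch (the overlapping class ranges are illustrative, not the tutorial's exact generator) varies _C_ on non-separable data and reports the resulting training errors and support-vector counts.

```python
import numpy as np
import cv2 as cv

# Two classes drawn from overlapping uniform PDFs, so no hyperplane is perfect.
np.random.seed(100)
classA = np.random.uniform(0.0, 0.6, (100, 2)).astype(np.float32)
classB = np.random.uniform(0.4, 1.0, (100, 2)).astype(np.float32)
trainData = np.vstack((classA, classB))
labels = np.array([1] * 100 + [2] * 100, dtype=np.int32)

for C in (0.01, 0.1, 1.0, 100.0):
    svm = cv.ml.SVM_create()
    svm.setType(cv.ml.SVM_C_SVC)
    svm.setKernel(cv.ml.SVM_LINEAR)
    svm.setC(C)
    # Non-separable data needs far more iterations to converge (see above).
    svm.setTermCriteria((cv.TERM_CRITERIA_MAX_ITER, int(1e6), 1e-6))
    svm.train(trainData, cv.ml.ROW_SAMPLE, labels)
    errors = int(np.sum(svm.predict(trainData)[1].ravel() != labels))
    nsv = svm.getUncompressedSupportVectors().shape[0]
    print('C = %-6g training errors: %3d support vectors: %d' % (C, errors, nsv))
```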
Results
-------

View File

@@ -6,6 +6,8 @@ of data.
- @subpage tutorial_introduction_to_svm
*Languages:* C++, Java, Python
*Compatibility:* \> OpenCV 2.0
*Author:* Fernando Iglesias García
@@ -14,6 +16,8 @@ of data.
- @subpage tutorial_non_linear_svms
*Languages:* C++, Java, Python
*Compatibility:* \> OpenCV 2.0
*Author:* Fernando Iglesias García
@@ -23,6 +27,8 @@ of data.
- @subpage tutorial_introduction_to_pca
*Languages:* C++, Java, Python
*Compatibility:* \> OpenCV 2.0
*Author:* Theodore Tsesmelis

View File

@@ -21,13 +21,9 @@ double getOrientation(const vector<Point> &, Mat&);
*/
void drawAxis(Mat& img, Point p, Point q, Scalar colour, const float scale = 0.2)
{
//! [visualization1]
double angle = atan2( (double) p.y - q.y, (double) p.x - q.x ); // angle in radians
double hypotenuse = sqrt( (double) (p.y - q.y) * (p.y - q.y) + (p.x - q.x) * (p.x - q.x));
// Here we lengthen the arrow by a factor of scale
q.x = (int) (p.x - scale * hypotenuse * cos(angle));
@@ -42,7 +38,7 @@ void drawAxis(Mat& img, Point p, Point q, Scalar colour, const float scale = 0.2
p.x = (int) (q.x + 9 * cos(angle - CV_PI / 4));
p.y = (int) (q.y + 9 * sin(angle - CV_PI / 4));
line(img, p, q, colour, 1, LINE_AA);
//! [visualization1]
}
/**
@@ -50,11 +46,11 @@ void drawAxis(Mat& img, Point p, Point q, Scalar colour, const float scale = 0.2
*/
double getOrientation(const vector<Point> &pts, Mat &img)
{
//! [pca]
//Construct a buffer used by the pca analysis
int sz = static_cast<int>(pts.size());
Mat data_pts = Mat(sz, 2, CV_64F);
for (int i = 0; i < data_pts.rows; i++)
{
data_pts.at<double>(i, 0) = pts[i].x;
data_pts.at<double>(i, 1) = pts[i].y;
@@ -70,16 +66,16 @@ double getOrientation(const vector<Point> &pts, Mat &img)
//Store the eigenvalues and eigenvectors
vector<Point2d> eigen_vecs(2);
vector<double> eigen_val(2);
for (int i = 0; i < 2; i++)
{
eigen_vecs[i] = Point2d(pca_analysis.eigenvectors.at<double>(i, 0),
pca_analysis.eigenvectors.at<double>(i, 1));
eigen_val[i] = pca_analysis.eigenvalues.at<double>(i);
}
//! [pca]
//! [visualization]
// Draw the principal components
circle(img, cntr, 3, Scalar(255, 0, 255), 2);
Point p1 = cntr + 0.02 * Point(static_cast<int>(eigen_vecs[0].x * eigen_val[0]), static_cast<int>(eigen_vecs[0].y * eigen_val[0]));
@@ -88,7 +84,7 @@ double getOrientation(const vector<Point> &pts, Mat &img)
drawAxis(img, cntr, p2, Scalar(255, 255, 0), 5);
double angle = atan2(eigen_vecs[0].y, eigen_vecs[0].x); // orientation in radians
//! [visualization]
return angle;
}
@@ -98,10 +94,10 @@ double getOrientation(const vector<Point> &pts, Mat &img)
*/
int main(int argc, char** argv)
{
//! [pre-process]
// Load image
CommandLineParser parser(argc, argv, "{@input | ../data/pca_test1.jpg | input image}");
parser.about( "This program demonstrates how to use OpenCV PCA to extract the orientation of an object.\n" );
parser.printMessage();
Mat src = imread(parser.get<String>("@input"));
@@ -122,14 +118,14 @@ int main(int argc, char** argv)
// Convert image to binary
Mat bw;
threshold(gray, bw, 50, 255, THRESH_BINARY | THRESH_OTSU);
//! [pre-process]
//! [contours]
// Find all the contours in the thresholded image
vector<vector<Point> > contours;
findContours(bw, contours, RETR_LIST, CHAIN_APPROX_NONE);
for (size_t i = 0; i < contours.size(); i++)
{
// Calculate the area of each contour
double area = contourArea(contours[i]);
@@ -137,14 +133,14 @@ int main(int argc, char** argv)
if (area < 1e2 || 1e5 < area) continue;
// Draw each contour only for visualisation purposes
drawContours(src, contours, static_cast<int>(i), Scalar(0, 0, 255), 2);
// Find the orientation of each shape
getOrientation(contours[i], src);
}
//! [contours]
imshow("output", src);
waitKey();
return 0;
}

View File

@@ -1,6 +1,6 @@
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include "opencv2/imgcodecs.hpp"
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/ml.hpp>
@@ -9,21 +9,16 @@ using namespace cv::ml;
int main(int, char**)
{
// Set up training data
//! [setup1]
int labels[4] = {1, -1, -1, -1};
float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
//! [setup1]
//! [setup2]
Mat trainingDataMat(4, 2, CV_32F, trainingData);
Mat labelsMat(4, 1, CV_32SC1, labels);
//! [setup2]
// Train the SVM
//! [init]
Ptr<SVM> svm = SVM::create();
@@ -35,11 +30,16 @@ int main(int, char**)
svm->train(trainingDataMat, ROW_SAMPLE, labelsMat);
//! [train]
// Data for visual representation
int width = 512, height = 512;
Mat image = Mat::zeros(height, width, CV_8UC3);
// Show the decision regions given by the SVM
//! [show]
Vec3b green(0,255,0), blue(255,0,0);
for (int i = 0; i < image.rows; i++)
{
for (int j = 0; j < image.cols; j++)
{
Mat sampleMat = (Mat_<float>(1,2) << j,i);
float response = svm->predict(sampleMat);
@@ -49,34 +49,33 @@ int main(int, char**)
else if (response == -1)
image.at<Vec3b>(i,j) = blue;
}
}
//! [show]
// Show the training data
//! [show_data]
int thickness = -1;
circle( image, Point(501, 10), 5, Scalar( 0, 0, 0), thickness );
circle( image, Point(255, 10), 5, Scalar(255, 255, 255), thickness );
circle( image, Point(501, 255), 5, Scalar(255, 255, 255), thickness );
circle( image, Point( 10, 501), 5, Scalar(255, 255, 255), thickness );
//! [show_data]
// Show support vectors
//! [show_vectors]
thickness = 2;
Mat sv = svm->getUncompressedSupportVectors();
for (int i = 0; i < sv.rows; ++i)
for (int i = 0; i < sv.rows; i++)
{
const float* v = sv.ptr<float>(i);
circle(image, Point( (int) v[0], (int) v[1]), 6, Scalar(128, 128, 128), thickness);
}
//! [show_vectors]
imwrite("result.png", image); // save the image
imshow("SVM Simple Example", image); // show it to the user
waitKey();
return 0;
}

View File

@@ -5,9 +5,6 @@
#include <opencv2/highgui.hpp>
#include <opencv2/ml.hpp>
using namespace cv;
using namespace cv::ml;
using namespace std;
@@ -16,8 +13,6 @@ static void help()
{
cout<< "\n--------------------------------------------------------------------------" << endl
<< "This program shows Support Vector Machines for Non-Linearly Separable Data. " << endl
<< "Usage:" << endl
<< "./non_linear_svms" << endl
<< "--------------------------------------------------------------------------" << endl
<< endl;
}
@@ -26,13 +21,16 @@ int main()
{
help();
const int NTRAINING_SAMPLES = 100; // Number of training samples per class
const float FRAC_LINEAR_SEP = 0.9f; // Fraction of samples which compose the linear separable part
// Data for visual representation
const int WIDTH = 512, HEIGHT = 512;
Mat I = Mat::zeros(HEIGHT, WIDTH, CV_8UC3);
//--------------------- 1. Set up training data randomly ---------------------------------------
Mat trainData(2*NTRAINING_SAMPLES, 2, CV_32F);
Mat labels (2*NTRAINING_SAMPLES, 1, CV_32S);
RNG rng(100); // Random value generation class
@@ -44,10 +42,10 @@ int main()
Mat trainClass = trainData.rowRange(0, nLinearSamples);
// The x coordinate of the points is in [0, 0.4)
Mat c = trainClass.colRange(0, 1);
rng.fill(c, RNG::UNIFORM, Scalar(0), Scalar(0.4 * WIDTH));
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1,2);
rng.fill(c, RNG::UNIFORM, Scalar(0), Scalar(HEIGHT));
// Generate random points for the class 2
trainClass = trainData.rowRange(2*NTRAINING_SAMPLES-nLinearSamples, 2*NTRAINING_SAMPLES);
@@ -56,26 +54,26 @@ int main()
rng.fill(c, RNG::UNIFORM, Scalar(0.6*WIDTH), Scalar(WIDTH));
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1,2);
rng.fill(c, RNG::UNIFORM, Scalar(0), Scalar(HEIGHT));
//! [setup1]
//------------------ Set up the non-linearly separable part of the training data ---------------
//! [setup2]
// Generate random points for the classes 1 and 2
trainClass = trainData.rowRange(nLinearSamples, 2*NTRAINING_SAMPLES-nLinearSamples);
// The x coordinate of the points is in [0.4, 0.6)
c = trainClass.colRange(0,1);
rng.fill(c, RNG::UNIFORM, Scalar(0.4*WIDTH), Scalar(0.6*WIDTH));
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1,2);
rng.fill(c, RNG::UNIFORM, Scalar(0), Scalar(HEIGHT));
//! [setup2]
//------------------------- Set up the labels for the classes ---------------------------------
labels.rowRange( 0, NTRAINING_SAMPLES).setTo(1); // Class 1
labels.rowRange(NTRAINING_SAMPLES, 2*NTRAINING_SAMPLES).setTo(2); // Class 2
//------------------------ 2. Set up the support vector machines parameters --------------------
cout << "Starting training process" << endl;
//! [init]
Ptr<SVM> svm = SVM::create();
@@ -84,6 +82,8 @@ int main()
svm->setKernel(SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, (int)1e7, 1e-6));
//! [init]
//------------------------ 3. Train the svm ----------------------------------------------------
//! [train]
svm->train(trainData, ROW_SAMPLE, labels);
//! [train]
@@ -91,53 +91,54 @@
//------------------------ 4. Show the decision regions ----------------------------------------
//! [show]
Vec3b green(0,100,0), blue(100,0,0);
for (int i = 0; i < I.rows; i++)
{
for (int j = 0; j < I.cols; j++)
{
Mat sampleMat = (Mat_<float>(1,2) << j, i);
float response = svm->predict(sampleMat);
if (response == 1) I.at<Vec3b>(i,j) = green;
else if (response == 2) I.at<Vec3b>(i,j) = blue;
}
}
//! [show]
//----------------------- 5. Show the training data --------------------------------------------
//! [show_data]
int thick = -1;
float px, py;
// Class 1
for (int i = 0; i < NTRAINING_SAMPLES; i++)
{
px = trainData.at<float>(i,0);
py = trainData.at<float>(i,1);
circle(I, Point( (int) px, (int) py ), 3, Scalar(0, 255, 0), thick);
}
// Class 2
for (int i = NTRAINING_SAMPLES; i <2*NTRAINING_SAMPLES; i++)
{
px = trainData.at<float>(i,0);
py = trainData.at<float>(i,1);
circle(I, Point( (int) px, (int) py ), 3, Scalar(255, 0, 0), thick);
}
//! [show_data]
//------------------------- 6. Show support vectors --------------------------------------------
//! [show_vectors]
thick = 2;
Mat sv = svm->getUncompressedSupportVectors();
for (int i = 0; i < sv.rows; ++i)
for (int i = 0; i < sv.rows; i++)
{
const float* v = sv.ptr<float>(i);
circle(I, Point( (int) v[0], (int) v[1]), 6, Scalar(128, 128, 128), thick);
}
//! [show_vectors]
imwrite("result.png", I); // save the Image
imwrite("result.png", I); // save the Image
imshow("SVM for Non-Linear Training Data", I); // show it to the user
waitKey();
return 0;
}

View File

@@ -0,0 +1,144 @@
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
//This program demonstrates how to use OpenCV PCA to extract the orientation of an object.
class IntroductionToPCA {
private void drawAxis(Mat img, Point p_, Point q_, Scalar colour, float scale) {
Point p = new Point(p_.x, p_.y);
Point q = new Point(q_.x, q_.y);
//! [visualization1]
double angle = Math.atan2(p.y - q.y, p.x - q.x); // angle in radians
double hypotenuse = Math.sqrt((p.y - q.y) * (p.y - q.y) + (p.x - q.x) * (p.x - q.x));
// Here we lengthen the arrow by a factor of scale
q.x = (int) (p.x - scale * hypotenuse * Math.cos(angle));
q.y = (int) (p.y - scale * hypotenuse * Math.sin(angle));
Imgproc.line(img, p, q, colour, 1, Core.LINE_AA, 0);
// create the arrow hooks
p.x = (int) (q.x + 9 * Math.cos(angle + Math.PI / 4));
p.y = (int) (q.y + 9 * Math.sin(angle + Math.PI / 4));
Imgproc.line(img, p, q, colour, 1, Core.LINE_AA, 0);
p.x = (int) (q.x + 9 * Math.cos(angle - Math.PI / 4));
p.y = (int) (q.y + 9 * Math.sin(angle - Math.PI / 4));
Imgproc.line(img, p, q, colour, 1, Core.LINE_AA, 0);
//! [visualization1]
}
private double getOrientation(MatOfPoint ptsMat, Mat img) {
List<Point> pts = ptsMat.toList();
//! [pca]
// Construct a buffer used by the pca analysis
int sz = pts.size();
Mat dataPts = new Mat(sz, 2, CvType.CV_64F);
double[] dataPtsData = new double[(int) (dataPts.total() * dataPts.channels())];
for (int i = 0; i < dataPts.rows(); i++) {
dataPtsData[i * dataPts.cols()] = pts.get(i).x;
dataPtsData[i * dataPts.cols() + 1] = pts.get(i).y;
}
dataPts.put(0, 0, dataPtsData);
// Perform PCA analysis
Mat mean = new Mat();
Mat eigenvectors = new Mat();
Mat eigenvalues = new Mat();
Core.PCACompute2(dataPts, mean, eigenvectors, eigenvalues);
double[] meanData = new double[(int) (mean.total() * mean.channels())];
mean.get(0, 0, meanData);
// Store the center of the object
Point cntr = new Point(meanData[0], meanData[1]);
// Store the eigenvalues and eigenvectors
double[] eigenvectorsData = new double[(int) (eigenvectors.total() * eigenvectors.channels())];
double[] eigenvaluesData = new double[(int) (eigenvalues.total() * eigenvalues.channels())];
eigenvectors.get(0, 0, eigenvectorsData);
eigenvalues.get(0, 0, eigenvaluesData);
//! [pca]
//! [visualization]
// Draw the principal components
Imgproc.circle(img, cntr, 3, new Scalar(255, 0, 255), 2);
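// Note: eigenvectorsData is the 2x2 eigenvector matrix flattened row-major,
// i.e. [v0.x, v0.y, v1.x, v1.y]; each axis endpoint below is the mean shifted
// by 0.02 * eigenvector * eigenvalue.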
Point p1 = new Point(cntr.x + 0.02 * eigenvectorsData[0] * eigenvaluesData[0],
cntr.y + 0.02 * eigenvectorsData[1] * eigenvaluesData[0]);
Point p2 = new Point(cntr.x - 0.02 * eigenvectorsData[2] * eigenvaluesData[1],
cntr.y - 0.02 * eigenvectorsData[3] * eigenvaluesData[1]);
drawAxis(img, cntr, p1, new Scalar(0, 255, 0), 1);
drawAxis(img, cntr, p2, new Scalar(255, 255, 0), 5);
double angle = Math.atan2(eigenvectorsData[1], eigenvectorsData[0]); // orientation in radians
//! [visualization]
return angle;
}
public void run(String[] args) {
//! [pre-process]
// Load image
String filename = args.length > 0 ? args[0] : "../data/pca_test1.jpg";
Mat src = Imgcodecs.imread(filename);
// Check if image is loaded successfully
if (src.empty()) {
System.err.println("Cannot read image: " + filename);
System.exit(0);
}
Mat srcOriginal = src.clone();
HighGui.imshow("src", srcOriginal);
// Convert image to grayscale
Mat gray = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
// Convert image to binary
Mat bw = new Mat();
Imgproc.threshold(gray, bw, 50, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
//! [pre-process]
//! [contours]
// Find all the contours in the thresholded image
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(bw, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_NONE);
for (int i = 0; i < contours.size(); i++) {
// Calculate the area of each contour
double area = Imgproc.contourArea(contours.get(i));
// Ignore contours that are too small or too large
if (area < 1e2 || 1e5 < area)
continue;
// Draw each contour only for visualisation purposes
Imgproc.drawContours(src, contours, i, new Scalar(0, 0, 255), 2);
// Find the orientation of each shape
getOrientation(contours.get(i), src);
}
//! [contours]
HighGui.imshow("output", src);
HighGui.waitKey();
System.exit(0);
}
}
public class IntroductionToPCADemo {
public static void main(String[] args) {
// Load the native OpenCV library
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new IntroductionToPCA().run(args);
}
}

View File

@@ -0,0 +1,99 @@
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.TermCriteria;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.ml.Ml;
import org.opencv.ml.SVM;
public class IntroductionToSVMDemo {
public static void main(String[] args) {
// Load the native OpenCV library
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
// Set up training data
//! [setup1]
int[] labels = { 1, -1, -1, -1 };
float[] trainingData = { 501, 10, 255, 10, 501, 255, 10, 501 };
//! [setup1]
//! [setup2]
Mat trainingDataMat = new Mat(4, 2, CvType.CV_32FC1);
trainingDataMat.put(0, 0, trainingData);
Mat labelsMat = new Mat(4, 1, CvType.CV_32SC1);
labelsMat.put(0, 0, labels);
//! [setup2]
// Train the SVM
//! [init]
SVM svm = SVM.create();
svm.setType(SVM.C_SVC);
svm.setKernel(SVM.LINEAR);
svm.setTermCriteria(new TermCriteria(TermCriteria.MAX_ITER, 100, 1e-6));
//! [init]
//! [train]
svm.train(trainingDataMat, Ml.ROW_SAMPLE, labelsMat);
//! [train]
// Data for visual representation
int width = 512, height = 512;
Mat image = Mat.zeros(height, width, CvType.CV_8UC3);
// Show the decision regions given by the SVM
//! [show]
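// Write the pixels into a Java byte[] buffer and flush it with image.put() once
// at the end: far cheaper than per-pixel Mat get()/put() calls from Java.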
byte[] imageData = new byte[(int) (image.total() * image.channels())];
Mat sampleMat = new Mat(1, 2, CvType.CV_32F);
float[] sampleMatData = new float[(int) (sampleMat.total() * sampleMat.channels())];
for (int i = 0; i < image.rows(); i++) {
for (int j = 0; j < image.cols(); j++) {
sampleMatData[0] = j;
sampleMatData[1] = i;
sampleMat.put(0, 0, sampleMatData);
float response = svm.predict(sampleMat);
if (response == 1) {
imageData[(i * image.cols() + j) * image.channels()] = 0;
imageData[(i * image.cols() + j) * image.channels() + 1] = (byte) 255;
imageData[(i * image.cols() + j) * image.channels() + 2] = 0;
} else if (response == -1) {
imageData[(i * image.cols() + j) * image.channels()] = (byte) 255;
imageData[(i * image.cols() + j) * image.channels() + 1] = 0;
imageData[(i * image.cols() + j) * image.channels() + 2] = 0;
}
}
}
image.put(0, 0, imageData);
//! [show]
// Show the training data
//! [show_data]
int thickness = -1;
int lineType = Core.LINE_8;
Imgproc.circle(image, new Point(501, 10), 5, new Scalar(0, 0, 0), thickness, lineType, 0);
Imgproc.circle(image, new Point(255, 10), 5, new Scalar(255, 255, 255), thickness, lineType, 0);
Imgproc.circle(image, new Point(501, 255), 5, new Scalar(255, 255, 255), thickness, lineType, 0);
Imgproc.circle(image, new Point(10, 501), 5, new Scalar(255, 255, 255), thickness, lineType, 0);
//! [show_data]
// Show support vectors
//! [show_vectors]
thickness = 2;
Mat sv = svm.getUncompressedSupportVectors();
float[] svData = new float[(int) (sv.total() * sv.channels())];
sv.get(0, 0, svData);
for (int i = 0; i < sv.rows(); ++i) {
Imgproc.circle(image, new Point(svData[i * sv.cols()], svData[i * sv.cols() + 1]), 6,
new Scalar(128, 128, 128), thickness, lineType, 0);
}
//! [show_vectors]
Imgcodecs.imwrite("result.png", image); // save the image
HighGui.imshow("SVM Simple Example", image); // show it to the user
HighGui.waitKey();
System.exit(0);
}
}

View File

@@ -0,0 +1,186 @@
import java.util.Random;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.TermCriteria;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.ml.Ml;
import org.opencv.ml.SVM;
public class NonLinearSVMsDemo {
public static final int NTRAINING_SAMPLES = 100;
public static final float FRAC_LINEAR_SEP = 0.9f;
public static void main(String[] args) {
// Load the native OpenCV library
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
System.out.println("\n--------------------------------------------------------------------------");
System.out.println("This program shows Support Vector Machines for Non-Linearly Separable Data. ");
System.out.println("--------------------------------------------------------------------------\n");
// Data for visual representation
int width = 512, height = 512;
Mat I = Mat.zeros(height, width, CvType.CV_8UC3);
// --------------------- 1. Set up training data randomly---------------------------------------
Mat trainData = new Mat(2 * NTRAINING_SAMPLES, 2, CvType.CV_32F);
Mat labels = new Mat(2 * NTRAINING_SAMPLES, 1, CvType.CV_32S);
Random rng = new Random(100); // Random value generation class
// Set up the linearly separable part of the training data
int nLinearSamples = (int) (FRAC_LINEAR_SEP * NTRAINING_SAMPLES);
//! [setup1]
// Generate random points for the class 1
Mat trainClass = trainData.rowRange(0, nLinearSamples);
// The x coordinate of the points is in [0, 0.4)
Mat c = trainClass.colRange(0, 1);
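// java.util.Random has no float-array filler, so draw doubles in the desired
// range and cast them to float before writing them into the Mat.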
float[] cData = new float[(int) (c.total() * c.channels())];
double[] cDataDbl = rng.doubles(cData.length, 0, 0.4f * width).toArray();
for (int i = 0; i < cData.length; i++) {
cData[i] = (float) cDataDbl[i];
}
c.put(0, 0, cData);
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1, 2);
cData = new float[(int) (c.total() * c.channels())];
cDataDbl = rng.doubles(cData.length, 0, height).toArray();
for (int i = 0; i < cData.length; i++) {
cData[i] = (float) cDataDbl[i];
}
c.put(0, 0, cData);
// Generate random points for the class 2
trainClass = trainData.rowRange(2 * NTRAINING_SAMPLES - nLinearSamples, 2 * NTRAINING_SAMPLES);
// The x coordinate of the points is in [0.6, 1]
c = trainClass.colRange(0, 1);
cData = new float[(int) (c.total() * c.channels())];
cDataDbl = rng.doubles(cData.length, 0.6 * width, width).toArray();
for (int i = 0; i < cData.length; i++) {
cData[i] = (float) cDataDbl[i];
}
c.put(0, 0, cData);
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1, 2);
cData = new float[(int) (c.total() * c.channels())];
cDataDbl = rng.doubles(cData.length, 0, height).toArray();
for (int i = 0; i < cData.length; i++) {
cData[i] = (float) cDataDbl[i];
}
c.put(0, 0, cData);
//! [setup1]
// ------------------ Set up the non-linearly separable part of the training data ---------------
//! [setup2]
// Generate random points for the classes 1 and 2
trainClass = trainData.rowRange(nLinearSamples, 2 * NTRAINING_SAMPLES - nLinearSamples);
// The x coordinate of the points is in [0.4, 0.6)
c = trainClass.colRange(0, 1);
cData = new float[(int) (c.total() * c.channels())];
cDataDbl = rng.doubles(cData.length, 0.4 * width, 0.6 * width).toArray();
for (int i = 0; i < cData.length; i++) {
cData[i] = (float) cDataDbl[i];
}
c.put(0, 0, cData);
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1, 2);
cData = new float[(int) (c.total() * c.channels())];
cDataDbl = rng.doubles(cData.length, 0, height).toArray();
for (int i = 0; i < cData.length; i++) {
cData[i] = (float) cDataDbl[i];
}
c.put(0, 0, cData);
//! [setup2]
// ------------------------- Set up the labels for the classes---------------------------------
labels.rowRange(0, NTRAINING_SAMPLES).setTo(new Scalar(1)); // Class 1
labels.rowRange(NTRAINING_SAMPLES, 2 * NTRAINING_SAMPLES).setTo(new Scalar(2)); // Class 2
// ------------------------ 2. Set up the support vector machines parameters--------------------
System.out.println("Starting training process");
//! [init]
SVM svm = SVM.create();
svm.setType(SVM.C_SVC);
svm.setC(0.1);
svm.setKernel(SVM.LINEAR);
svm.setTermCriteria(new TermCriteria(TermCriteria.MAX_ITER, (int) 1e7, 1e-6));
//! [init]
// ------------------------ 3. Train the svm----------------------------------------------------
//! [train]
svm.train(trainData, Ml.ROW_SAMPLE, labels);
//! [train]
System.out.println("Finished training process");
// ------------------------ 4. Show the decision regions----------------------------------------
//! [show]
byte[] IData = new byte[(int) (I.total() * I.channels())];
Mat sampleMat = new Mat(1, 2, CvType.CV_32F);
float[] sampleMatData = new float[(int) (sampleMat.total() * sampleMat.channels())];
for (int i = 0; i < I.rows(); i++) {
for (int j = 0; j < I.cols(); j++) {
sampleMatData[0] = j;
sampleMatData[1] = i;
sampleMat.put(0, 0, sampleMatData);
float response = svm.predict(sampleMat);
if (response == 1) {
IData[(i * I.cols() + j) * I.channels()] = 0;
IData[(i * I.cols() + j) * I.channels() + 1] = 100;
IData[(i * I.cols() + j) * I.channels() + 2] = 0;
} else if (response == 2) {
IData[(i * I.cols() + j) * I.channels()] = 100;
IData[(i * I.cols() + j) * I.channels() + 1] = 0;
IData[(i * I.cols() + j) * I.channels() + 2] = 0;
}
}
}
I.put(0, 0, IData);
//! [show]
// ----------------------- 5. Show the training data--------------------------------------------
//! [show_data]
int thick = -1;
int lineType = Core.LINE_8;
float px, py;
// Class 1
float[] trainDataData = new float[(int) (trainData.total() * trainData.channels())];
trainData.get(0, 0, trainDataData);
for (int i = 0; i < NTRAINING_SAMPLES; i++) {
px = trainDataData[i * trainData.cols()];
py = trainDataData[i * trainData.cols() + 1];
Imgproc.circle(I, new Point(px, py), 3, new Scalar(0, 255, 0), thick, lineType, 0);
}
// Class 2
for (int i = NTRAINING_SAMPLES; i < 2 * NTRAINING_SAMPLES; ++i) {
px = trainDataData[i * trainData.cols()];
py = trainDataData[i * trainData.cols() + 1];
Imgproc.circle(I, new Point(px, py), 3, new Scalar(255, 0, 0), thick, lineType, 0);
}
//! [show_data]
// ------------------------- 6. Show support vectors--------------------------------------------
//! [show_vectors]
thick = 2;
Mat sv = svm.getUncompressedSupportVectors();
float[] svData = new float[(int) (sv.total() * sv.channels())];
sv.get(0, 0, svData);
for (int i = 0; i < sv.rows(); i++) {
Imgproc.circle(I, new Point(svData[i * sv.cols()], svData[i * sv.cols() + 1]), 6, new Scalar(128, 128, 128),
thick, lineType, 0);
}
//! [show_vectors]
Imgcodecs.imwrite("result.png", I); // save the Image
HighGui.imshow("SVM for Non-Linear Training Data", I); // show it to the user
HighGui.waitKey();
System.exit(0);
}
}

View File

@@ -25,8 +25,8 @@ def thresh_callback(val):
boundRect = [None]*len(contours)
centers = [None]*len(contours)
radius = [None]*len(contours)
for i, c in enumerate(contours):
contours_poly[i] = cv.approxPolyDP(c, 3, True)
boundRect[i] = cv.boundingRect(contours_poly[i])
centers[i], radius[i] = cv.minEnclosingCircle(contours_poly[i])
## [allthework]

View File

@@ -22,22 +22,22 @@ def thresh_callback(val):
# Find the rotated rectangles and ellipses for each contour
minRect = [None]*len(contours)
minEllipse = [None]*len(contours)
for i, c in enumerate(contours):
minRect[i] = cv.minAreaRect(c)
if c.shape[0] > 5:
minEllipse[i] = cv.fitEllipse(c)
# Draw contours + rotated rects + ellipses
## [zeroMat]
drawing = np.zeros((canny_output.shape[0], canny_output.shape[1], 3), dtype=np.uint8)
## [zeroMat]
## [forContour]
for i, c in enumerate(contours):
color = (rng.randint(0,256), rng.randint(0,256), rng.randint(0,256))
# contour
cv.drawContours(drawing, contours, i, color)
# ellipse
if c.shape[0] > 5:
cv.ellipse(drawing, minEllipse[i], color, 2)
# rotated rectangle
box = cv.boxPoints(minRect[i])

View File

@@ -0,0 +1,100 @@
from __future__ import print_function
from __future__ import division
import cv2 as cv
import numpy as np
import argparse
from math import atan2, cos, sin, sqrt, pi
def drawAxis(img, p_, q_, colour, scale):
p = list(p_)
q = list(q_)
## [visualization1]
angle = atan2(p[1] - q[1], p[0] - q[0]) # angle in radians
hypotenuse = sqrt((p[1] - q[1]) * (p[1] - q[1]) + (p[0] - q[0]) * (p[0] - q[0]))
# Here we lengthen the arrow by a factor of scale
q[0] = p[0] - scale * hypotenuse * cos(angle)
q[1] = p[1] - scale * hypotenuse * sin(angle)
cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv.LINE_AA)
# create the arrow hooks
p[0] = q[0] + 9 * cos(angle + pi / 4)
p[1] = q[1] + 9 * sin(angle + pi / 4)
cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv.LINE_AA)
p[0] = q[0] + 9 * cos(angle - pi / 4)
p[1] = q[1] + 9 * sin(angle - pi / 4)
cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv.LINE_AA)
## [visualization1]
def getOrientation(pts, img):
## [pca]
# Construct a buffer used by the pca analysis
sz = len(pts)
data_pts = np.empty((sz, 2), dtype=np.float64)
for i in range(data_pts.shape[0]):
data_pts[i,0] = pts[i,0,0]
data_pts[i,1] = pts[i,0,1]
# Perform PCA analysis
mean = np.empty((0))
mean, eigenvectors, eigenvalues = cv.PCACompute2(data_pts, mean)
# Store the center of the object
cntr = (int(mean[0,0]), int(mean[0,1]))
## [pca]
## [visualization]
# Draw the principal components
cv.circle(img, cntr, 3, (255, 0, 255), 2)
p1 = (cntr[0] + 0.02 * eigenvectors[0,0] * eigenvalues[0,0], cntr[1] + 0.02 * eigenvectors[0,1] * eigenvalues[0,0])
p2 = (cntr[0] - 0.02 * eigenvectors[1,0] * eigenvalues[1,0], cntr[1] - 0.02 * eigenvectors[1,1] * eigenvalues[1,0])
drawAxis(img, cntr, p1, (0, 255, 0), 1)
drawAxis(img, cntr, p2, (255, 255, 0), 5)
angle = atan2(eigenvectors[0,1], eigenvectors[0,0]) # orientation in radians
## [visualization]
return angle
## [pre-process]
# Load image
parser = argparse.ArgumentParser(description='Code for Introduction to Principal Component Analysis (PCA) tutorial.\
This program demonstrates how to use OpenCV PCA to extract the orientation of an object.')
parser.add_argument('--input', help='Path to input image.', default='../data/pca_test1.jpg')
args = parser.parse_args()
src = cv.imread(args.input)
# Check if image is loaded successfully
if src is None:
print('Could not open or find the image: ', args.input)
exit(0)
cv.imshow('src', src)
# Convert image to grayscale
gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
# Convert image to binary
_, bw = cv.threshold(gray, 50, 255, cv.THRESH_BINARY | cv.THRESH_OTSU)
## [pre-process]
## [contours]
# Find all the contours in the thresholded image
_, contours, _ = cv.findContours(bw, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)
for i, c in enumerate(contours):
# Calculate the area of each contour
area = cv.contourArea(c)
# Ignore contours that are too small or too large
if area < 1e2 or 1e5 < area:
continue
# Draw each contour only for visualisation purposes
cv.drawContours(src, contours, i, (0, 0, 255), 2)
# Find the orientation of each shape
getOrientation(c, src)
## [contours]
cv.imshow('output', src)
cv.waitKey()

View File

@@ -0,0 +1,62 @@
import cv2 as cv
import numpy as np
# Set up training data
## [setup1]
labels = np.array([1, -1, -1, -1])
trainingData = np.matrix([[501, 10], [255, 10], [501, 255], [10, 501]], dtype=np.float32)
## [setup1]
# Train the SVM
## [init]
svm = cv.ml.SVM_create()
svm.setType(cv.ml.SVM_C_SVC)
svm.setKernel(cv.ml.SVM_LINEAR)
svm.setTermCriteria((cv.TERM_CRITERIA_MAX_ITER, 100, 1e-6))
## [init]
## [train]
svm.train(trainingData, cv.ml.ROW_SAMPLE, labels)
## [train]
# Data for visual representation
width = 512
height = 512
image = np.zeros((height, width, 3), dtype=np.uint8)
# Show the decision regions given by the SVM
## [show]
green = (0,255,0)
blue = (255,0,0)
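# predict() returns a (retval, results) pair; index [1] takes the predicted labels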
for i in range(image.shape[0]):
for j in range(image.shape[1]):
sampleMat = np.matrix([[j,i]], dtype=np.float32)
response = svm.predict(sampleMat)[1]
if response == 1:
image[i,j] = green
elif response == -1:
image[i,j] = blue
## [show]
# Show the training data
## [show_data]
thickness = -1
cv.circle(image, (501, 10), 5, ( 0, 0, 0), thickness)
cv.circle(image, (255, 10), 5, (255, 255, 255), thickness)
cv.circle(image, (501, 255), 5, (255, 255, 255), thickness)
cv.circle(image, ( 10, 501), 5, (255, 255, 255), thickness)
## [show_data]
# Show support vectors
## [show_vectors]
thickness = 2
sv = svm.getUncompressedSupportVectors()
for i in range(sv.shape[0]):
cv.circle(image, (sv[i,0], sv[i,1]), 6, (128, 128, 128), thickness)
## [show_vectors]
cv.imwrite('result.png', image) # save the image
cv.imshow('SVM Simple Example', image) # show it to the user
cv.waitKey()

View File

@@ -0,0 +1,117 @@
from __future__ import print_function
import cv2 as cv
import numpy as np
NTRAINING_SAMPLES = 100 # Number of training samples per class
FRAC_LINEAR_SEP = 0.9 # Fraction of samples which compose the linear separable part
# Data for visual representation
WIDTH = 512
HEIGHT = 512
I = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
# --------------------- 1. Set up training data randomly ---------------------------------------
trainData = np.empty((2*NTRAINING_SAMPLES, 2), dtype=np.float32)
labels = np.empty((2*NTRAINING_SAMPLES, 1), dtype=np.int32)
np.random.seed(100) # seed NumPy's RNG (used below) so the training data is reproducible
# Set up the linearly separable part of the training data
nLinearSamples = int(FRAC_LINEAR_SEP * NTRAINING_SAMPLES)
## [setup1]
# Generate random points for the class 1
trainClass = trainData[0:nLinearSamples,:]
# The x coordinate of the points is in [0, 0.4)
c = trainClass[:,0:1]
c[:] = np.random.uniform(0.0, 0.4 * WIDTH, c.shape)
# The y coordinate of the points is in [0, 1)
c = trainClass[:,1:2]
c[:] = np.random.uniform(0.0, HEIGHT, c.shape)
# Generate random points for the class 2
trainClass = trainData[2*NTRAINING_SAMPLES-nLinearSamples:2*NTRAINING_SAMPLES,:]
# The x coordinate of the points is in [0.6, 1]
c = trainClass[:,0:1]
c[:] = np.random.uniform(0.6*WIDTH, WIDTH, c.shape)
# The y coordinate of the points is in [0, 1)
c = trainClass[:,1:2]
c[:] = np.random.uniform(0.0, HEIGHT, c.shape)
## [setup1]
#------------------ Set up the non-linearly separable part of the training data ---------------
## [setup2]
# Generate random points for the classes 1 and 2
trainClass = trainData[nLinearSamples:2*NTRAINING_SAMPLES-nLinearSamples,:]
# The x coordinate of the points is in [0.4, 0.6)
c = trainClass[:,0:1]
c[:] = np.random.uniform(0.4*WIDTH, 0.6*WIDTH, c.shape)
# The y coordinate of the points is in [0, 1)
c = trainClass[:,1:2]
c[:] = np.random.uniform(0.0, HEIGHT, c.shape)
## [setup2]
#------------------------- Set up the labels for the classes ---------------------------------
labels[0:NTRAINING_SAMPLES,:] = 1 # Class 1
labels[NTRAINING_SAMPLES:2*NTRAINING_SAMPLES,:] = 2 # Class 2
#------------------------ 2. Set up the support vector machines parameters --------------------
print('Starting training process')
## [init]
svm = cv.ml.SVM_create()
svm.setType(cv.ml.SVM_C_SVC)
svm.setC(0.1)
svm.setKernel(cv.ml.SVM_LINEAR)
svm.setTermCriteria((cv.TERM_CRITERIA_MAX_ITER, int(1e7), 1e-6))
## [init]
#------------------------ 3. Train the svm ----------------------------------------------------
## [train]
svm.train(trainData, cv.ml.ROW_SAMPLE, labels)
## [train]
print('Finished training process')
#------------------------ 4. Show the decision regions ----------------------------------------
## [show]
green = (0,100,0)
blue = (100,0,0)
for i in range(I.shape[0]):
for j in range(I.shape[1]):
sampleMat = np.matrix([[j,i]], dtype=np.float32)
response = svm.predict(sampleMat)[1]
if response == 1:
I[i,j] = green
elif response == 2:
I[i,j] = blue
## [show]
#----------------------- 5. Show the training data --------------------------------------------
## [show_data]
thick = -1
# Class 1
for i in range(NTRAINING_SAMPLES):
px = trainData[i,0]
py = trainData[i,1]
cv.circle(I, (px, py), 3, (0, 255, 0), thick)
# Class 2
for i in range(NTRAINING_SAMPLES, 2*NTRAINING_SAMPLES):
px = trainData[i,0]
py = trainData[i,1]
cv.circle(I, (px, py), 3, (255, 0, 0), thick)
## [show_data]
#------------------------- 6. Show support vectors --------------------------------------------
## [show_vectors]
thick = 2
sv = svm.getUncompressedSupportVectors()
for i in range(sv.shape[0]):
cv.circle(I, (sv[i,0], sv[i,1]), 6, (128, 128, 128), thick)
## [show_vectors]
cv.imwrite('result.png', I) # save the Image
cv.imshow('SVM for Non-Linear Training Data', I) # show it to the user
cv.waitKey()