mirror of https://github.com/opencv/opencv.git
synced 2025-06-07 17:44:04 +08:00

Merge remote-tracking branch 'upstream/3.4' into merge-3.4
commit 01a28db949
@ -22,6 +22,8 @@
<skip_headers>
opencv2/core/hal/intrin*
opencv2/core/hal/*macros.*
opencv2/core/hal/*.impl.*
opencv2/core/cuda*
opencv2/core/opencl*
opencv2/core/private*
@ -5,44 +5,44 @@ Goal
----

- In this tutorial, you will learn how to convert images from one color-space to another, like
  BGR \f$\leftrightarrow\f$ Gray, BGR \f$\leftrightarrow\f$ HSV, etc.
- In addition to that, we will create an application to extract a colored object in a video
- You will learn the following functions: **cv.cvtColor()**, **cv.inRange()**, etc.

Changing Color-space
--------------------

There are more than 150 color-space conversion methods available in OpenCV. But we will look into
only the two which are most widely used: BGR \f$\leftrightarrow\f$ Gray and BGR \f$\leftrightarrow\f$ HSV.

For color conversion, we use the function cv.cvtColor(input_image, flag), where flag determines the
type of conversion.

For BGR \f$\rightarrow\f$ Gray conversion, we use the flag cv.COLOR_BGR2GRAY. Similarly, for BGR
\f$\rightarrow\f$ HSV, we use the flag cv.COLOR_BGR2HSV. To get other flags, just run the following
commands in your Python terminal:
@code{.py}
>>> import cv2 as cv
>>> flags = [i for i in dir(cv) if i.startswith('COLOR_')]
>>> print( flags )
@endcode
@note For HSV, the hue range is [0,179], the saturation range is [0,255], and the value range is [0,255].
Different software uses different scales, so if you are comparing OpenCV values with them, you need
to normalize these ranges.

Object Tracking
---------------

Now that we know how to convert a BGR image to HSV, we can use this to extract a colored object. In HSV, it
is easier to represent a color than in the BGR color-space. In our application, we will try to extract
a blue colored object. So here is the method:

- Take each frame of the video
- Convert from BGR to HSV color-space
- Threshold the HSV image for a range of blue color
- Now extract the blue object alone; we can do whatever we want with that image.

Below is the code, which is commented in detail:
@code{.py}
import cv2 as cv
import numpy as np
@ -80,18 +80,18 @@ Below image shows tracking of the blue object:

![image](images/frame.jpg)

@note There is some noise in the image. We will see how to remove it in later chapters.

@note This is the simplest method in object tracking. Once you learn the functions of contours, you can
do plenty of things, like finding the centroid of an object and using it to track the object, or drawing
diagrams just by moving your hand in front of a camera, and other fun stuff.

How to find HSV values to track?
--------------------------------

This is a common question found on [stackoverflow.com](http://www.stackoverflow.com). It is very simple, and
you can use the same function, cv.cvtColor(). Instead of passing an image, you just pass the BGR
values you want. For example, to find the HSV value of green, try the following commands in a Python
terminal:
@code{.py}
>>> green = np.uint8([[[0,255,0]]])
@ -99,7 +99,7 @@ terminal:
>>> print( hsv_green )
[[[ 60 255 255]]]
@endcode
Now take [H-10, 100, 100] and [H+10, 255, 255] as the lower and upper bounds respectively. Apart
from this method, you can use an image editing tool like GIMP or an online converter to find
these values, but don't forget to adjust the HSV ranges.
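The bound-picking recipe above can be cross-checked without OpenCV at all. The sketch below uses the standard-library colorsys module plus NumPy; the rescaling from colorsys's fractional hue to OpenCV's H range of [0,179] is written out by hand and is our own assumption about the mapping:

```python
import colorsys

import numpy as np

# BGR green, as in the tutorial snippet above.
b, g, r = 0, 255, 0

# colorsys returns hue as a fraction of a full turn; OpenCV stores H as
# degrees/2 so it fits in [0,179] for 8-bit images (S and V are in [0,255]).
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
hsv_green = np.array([round(h * 180) % 180, round(s * 255), round(v * 255)])

# Lower and upper bounds around the hue, as suggested above.
lower = np.array([hsv_green[0] - 10, 100, 100])
upper = np.array([hsv_green[0] + 10, 255, 255])
```

With OpenCV available, the same numbers come from cv.cvtColor on a 1x1 image, and `lower`/`upper` feed straight into cv.inRange().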

@ -109,5 +109,5 @@ Additional Resources
Exercises
---------

-# Try to find a way to extract more than one colored object, for example, extract red, blue, and green
   objects simultaneously.
@ -5,24 +5,24 @@ Goals
-----

Learn to:
- Blur images with various low-pass filters
- Apply custom-made filters to images (2D convolution)

2D Convolution (Image Filtering)
----------------------------------

As with one-dimensional signals, images can also be filtered with various low-pass filters (LPF),
high-pass filters (HPF), etc. An LPF helps in removing noise and blurring images, while an HPF helps
in finding edges in images.

OpenCV provides a function **cv.filter2D()** to convolve a kernel with an image. As an example, we
will try an averaging filter on an image. A 5x5 averaging filter kernel looks like this:

\f[K = \frac{1}{25} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}\f]
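To make the kernel concrete, here is a small NumPy sketch of the averaging step for a single pixel, on toy data and without border handling (cv.filter2D does the full image):

```python
import numpy as np

# The 5x5 averaging kernel K from the formula above.
K = np.ones((5, 5), dtype=np.float64) / 25.0

# A toy 7x7 "image" with values 0..48.
img = np.arange(49, dtype=np.float64).reshape(7, 7)

# Centre the kernel on pixel (3, 3): multiply the 5x5 patch under the kernel
# element-wise and sum -- exactly the mean of those 25 pixels.
patch = img[1:6, 1:6]
out_centre = np.sum(patch * K)
```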

The operation works like this: keep the kernel above a pixel, add up the 25 pixels below the kernel,
take the average, and replace the central pixel with the new average value. This operation is continued
for all the pixels in the image. Try this code and check the result:
@code{.py}
import numpy as np
import cv2 as cv
@ -47,20 +47,20 @@ Image Blurring (Image Smoothing)
--------------------------------

Image blurring is achieved by convolving the image with a low-pass filter kernel. It is useful for
removing noise. It actually removes high-frequency content (e.g. noise, edges) from the image, so
edges are blurred a little in this operation (there are also blurring techniques which don't
blur the edges). OpenCV provides four main types of blurring techniques.

### 1. Averaging

This is done by convolving an image with a normalized box filter. It simply takes the average of all
the pixels under the kernel area and replaces the central element. This is done by the function
**cv.blur()** or **cv.boxFilter()**. Check the docs for more details about the kernel. We should
specify the width and height of the kernel. A 3x3 normalized box filter looks like this:

\f[K = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}\f]

@note If you don't want to use a normalized box filter, use **cv.boxFilter()** and pass the argument
normalize=False to the function.

Check a sample demo below with a kernel of 5x5 size:
@ -85,12 +85,12 @@ Result:

### 2. Gaussian Blurring

In this method, instead of a box filter, a Gaussian kernel is used. It is done with the function
**cv.GaussianBlur()**. We should specify the width and height of the kernel, which should be positive
and odd. We should also specify the standard deviations in the X and Y directions, sigmaX and sigmaY
respectively. If only sigmaX is specified, sigmaY is taken to be the same as sigmaX. If both are given as
zeros, they are calculated from the kernel size. Gaussian blurring is highly effective in removing
Gaussian noise from an image.

If you want, you can create a Gaussian kernel with the function **cv.getGaussianKernel()**.

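As a sketch of what **cv.getGaussianKernel()** is documented to return — a sampled, normalized 1-D Gaussian — the helper below (its name is our own) builds the same kind of kernel in NumPy:

```python
import numpy as np

# G_i is proportional to exp(-(i - (ksize-1)/2)^2 / (2*sigma^2)), with the
# coefficients normalized so they sum to 1.
def gaussian_kernel(ksize, sigma):
    i = np.arange(ksize, dtype=np.float64)
    g = np.exp(-((i - (ksize - 1) / 2.0) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

k = gaussian_kernel(5, 1.0)

# A separable 2-D Gaussian kernel is the outer product of two 1-D kernels.
k2d = np.outer(k, k)
```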
@ -104,14 +104,14 @@ Result:

### 3. Median Blurring

Here, the function **cv.medianBlur()** takes the median of all the pixels under the kernel area, and the central
element is replaced with this median value. This is highly effective against salt-and-pepper noise
in an image. Interestingly, in the above filters, the central element is a newly
calculated value, which may be a pixel value in the image or a new value. But in median blurring,
the central element is always replaced by some pixel value in the image. It reduces the noise
effectively. Its kernel size should be a positive odd integer.
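The replace-with-the-median step can be sketched for one pixel in plain NumPy, on a toy image (cv.medianBlur does this for every pixel, with proper border handling):

```python
import numpy as np

# A flat image with a single "salt" spike in the middle.
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255

# The 3x3 neighbourhood of the centre pixel (kernel size 3: positive and odd).
patch = img[1:4, 1:4]

# The median of eight 10s and one 255 is 10, so the spike disappears entirely;
# an averaging filter would instead smear it into the neighbours.
new_centre = int(np.median(patch))
```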

In this demo, I added 50% noise to our original image and applied median blurring. Check the result:
@code{.py}
median = cv.medianBlur(img,5)
@endcode
@ -122,19 +122,19 @@ Result:
### 4. Bilateral Filtering

**cv.bilateralFilter()** is highly effective in noise removal while keeping edges sharp. But the
operation is slower compared to other filters. We already saw that a Gaussian filter takes the
neighbourhood around a pixel and finds its Gaussian weighted average. This Gaussian filter is a
function of space alone, that is, nearby pixels are considered while filtering. It doesn't consider
whether pixels have almost the same intensity, or whether a pixel is an edge pixel or
not. So it blurs the edges too, which we don't want.

Bilateral filtering also takes a Gaussian filter in space, but adds one more Gaussian filter which is a
function of pixel difference. The Gaussian function of space makes sure that only nearby pixels are considered
for blurring, while the Gaussian function of intensity difference makes sure that only those pixels with
intensities similar to the central pixel are considered for blurring. So it preserves the edges, since
pixels at edges have large intensity variation.
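The two Gaussian weights can be sketched on a 1-D signal with a step edge. The sigma values below are illustrative choices, not OpenCV's, and this is a single-pixel sketch rather than the real filter:

```python
import numpy as np

sig_space, sig_intensity = 2.0, 10.0
signal = np.array([10, 10, 10, 200, 200, 200], dtype=np.float64)

centre = 2                                    # a pixel on the dark side of the edge
x = np.arange(signal.size, dtype=np.float64)

# Spatial Gaussian: nearby pixels count more.
w_space = np.exp(-((x - centre) ** 2) / (2 * sig_space ** 2))
# Range Gaussian: pixels with similar intensity count more.
w_range = np.exp(-((signal - signal[centre]) ** 2) / (2 * sig_intensity ** 2))

# Bilateral weight is the product; the bright side of the edge gets ~0 weight.
w = w_space * w_range
filtered = np.sum(w * signal) / np.sum(w)

# For contrast, a plain spatial Gaussian average drags the value across the edge.
plain = np.sum(w_space * signal) / np.sum(w_space)
```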

The sample below shows the use of a bilateral filter (for details on the arguments, visit the docs):
@code{.py}
blur = cv.bilateralFilter(img,9,75,75)
@endcode
@ -142,7 +142,7 @@ Result:

![image](images/bilateral.jpg)

See, the texture on the surface is gone, but the edges are still preserved.

Additional Resources
--------------------
@ -4,7 +4,7 @@ Geometric Transformations of Images {#tutorial_py_geometric_transformations}
Goals
-----

- Learn to apply different geometric transformations to images, like translation, rotation, affine
  transformation, etc.
- You will see these functions: **cv.getPerspectiveTransform**

@ -12,7 +12,7 @@ Transformations
---------------

OpenCV provides two transformation functions, **cv.warpAffine** and **cv.warpPerspective**, with
which you can perform all kinds of transformations. **cv.warpAffine** takes a 2x3 transformation
matrix, while **cv.warpPerspective** takes a 3x3 transformation matrix as input.

### Scaling
@ -21,8 +21,8 @@ Scaling is just resizing of the image. OpenCV comes with a function **cv.resize(
purpose. The size of the image can be specified manually, or you can specify the scaling factor.
Different interpolation methods are used. Preferable interpolation methods are **cv.INTER_AREA**
for shrinking and **cv.INTER_CUBIC** (slow) & **cv.INTER_LINEAR** for zooming. By default,
the interpolation method **cv.INTER_LINEAR** is used for all resizing purposes. You can resize an
input image with either of the following methods:
@code{.py}
import numpy as np
import cv2 as cv
@ -38,13 +38,13 @@ res = cv.resize(img,(2*width, 2*height), interpolation = cv.INTER_CUBIC)
@endcode
### Translation

Translation is the shifting of an object's location. If you know the shift in the (x,y) direction, let it
be \f$(t_x,t_y)\f$; you can then create the transformation matrix \f$\textbf{M}\f$ as follows:

\f[M = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \end{bmatrix}\f]
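What this 2x3 matrix does to a single point can be sketched directly in NumPy: cv.warpAffine effectively applies M to homogeneous coordinates [x, y, 1] (toy coordinates below):

```python
import numpy as np

tx, ty = 100, 50
M = np.float32([[1, 0, tx],
                [0, 1, ty]])

# (x, y) = (30, 40) in homogeneous coordinates.
point = np.array([30.0, 40.0, 1.0])
shifted = M @ point          # every pixel moves by (tx, ty)
```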

You can make this into a Numpy array of type np.float32 and pass it to the **cv.warpAffine()**
function. See the below example for a shift of (100,50):
@code{.py}
import numpy as np
import cv2 as cv
@ -61,7 +61,7 @@ cv.destroyAllWindows()
@endcode
**warning**

The third argument of the **cv.warpAffine()** function is the size of the output image, which should
be in the form **(width, height)**. Remember: width = number of columns, and height = number of
rows.

@ -76,7 +76,7 @@ Rotation of an image for an angle \f$\theta\f$ is achieved by the transformation

\f[M = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\f]

But OpenCV provides scaled rotation with an adjustable center of rotation, so that you can rotate about any
location you prefer. The modified transformation matrix is given by

\f[\begin{bmatrix} \alpha & \beta & (1- \alpha ) \cdot center.x - \beta \cdot center.y \\ - \beta & \alpha & \beta \cdot center.x + (1- \alpha ) \cdot center.y \end{bmatrix}\f]

@ -84,7 +84,7 @@ where:

\f[\begin{array}{l} \alpha = scale \cdot \cos \theta , \\ \beta = scale \cdot \sin \theta \end{array}\f]
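The matrix above can be built and checked in NumPy. The sketch below uses theta = 90 degrees, scale = 1, and centre (50, 50) — the same matrix that cv.getRotationMatrix2D((50, 50), 90, 1.0) is documented to return:

```python
import numpy as np

theta = np.deg2rad(90)
scale = 1.0
cx = cy = 50.0

alpha = scale * np.cos(theta)
beta = scale * np.sin(theta)
M = np.array([[alpha,  beta, (1 - alpha) * cx - beta * cy],
              [-beta, alpha, beta * cx + (1 - alpha) * cy]])

# Rotating the centre maps it to itself; a point to its right moves "up"
# (smaller y, since image y grows downward).
centre = M @ np.array([50.0, 50.0, 1.0])
right = M @ np.array([60.0, 50.0, 1.0])
```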

To find this transformation matrix, OpenCV provides a function, **cv.getRotationMatrix2D**. Check out the
below example, which rotates the image by 90 degrees with respect to the center, without any scaling.
@code{.py}
img = cv.imread('messi5.jpg',0)
@ -101,11 +101,11 @@ See the result:
### Affine Transformation

In an affine transformation, all parallel lines in the original image will still be parallel in the
output image. To find the transformation matrix, we need three points from the input image and their
corresponding locations in the output image. Then **cv.getAffineTransform** will create a 2x3 matrix,
which is to be passed to **cv.warpAffine**.
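The linear solve behind **cv.getAffineTransform** can be sketched in NumPy: three exact point correspondences determine the six unknowns of the 2x3 matrix (the point coordinates below are illustrative):

```python
import numpy as np

src = np.float32([[50, 50], [200, 50], [50, 200]])
dst = np.float32([[10, 100], [200, 50], [100, 250]])

# Each row [x, y, 1] of A, times the unknown coefficients, gives one output
# coordinate; solve once for the x-map and once for the y-map.
A = np.hstack([src, np.ones((3, 1), dtype=np.float32)])
M = np.vstack([np.linalg.solve(A, dst[:, 0]),
               np.linalg.solve(A, dst[:, 1])])   # the 2x3 affine matrix

# Sanity check: M maps every source point onto its destination.
mapped = (M @ np.hstack([src, np.ones((3, 1))]).T).T
```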

Check the below example, and also look at the points I selected (marked in green):
@code{.py}
img = cv.imread('drawing.png')
rows,cols,ch = img.shape
@ -130,7 +130,7 @@ See the result:
For perspective transformation, you need a 3x3 transformation matrix. Straight lines will remain
straight even after the transformation. To find this transformation matrix, you need 4 points on the
input image and the corresponding points on the output image. Among these 4 points, 3 of them should not
be collinear. Then the transformation matrix can be found by the function
**cv.getPerspectiveTransform**. Then apply **cv.warpPerspective** with this 3x3 transformation
matrix.
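The system that **cv.getPerspectiveTransform** solves can be sketched in NumPy. With H[2,2] fixed to 1, each correspondence (x, y) -> (u, v) contributes two linear equations in the remaining eight unknowns (the point coordinates below are illustrative):

```python
import numpy as np

# u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1)
# v = (h3*x + h4*y + h5) / (h6*x + h7*y + 1)
src = np.array([[56, 65], [368, 52], [28, 387], [389, 390]], dtype=np.float64)
dst = np.array([[0, 0], [300, 0], [0, 300], [300, 300]], dtype=np.float64)

A, b = [], []
for (x, y), (u, v) in zip(src, dst):
    A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
    A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
h = np.linalg.solve(np.array(A), np.array(b))
H = np.append(h, 1.0).reshape(3, 3)

# Sanity check: apply H in homogeneous coordinates and divide by w.
pts = np.hstack([src, np.ones((4, 1))]) @ H.T
mapped = pts[:, :2] / pts[:, 2:]
```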

@ -4,13 +4,13 @@ Image Thresholding {#tutorial_py_thresholding}
Goal
----

- In this tutorial, you will learn simple thresholding, adaptive thresholding, and Otsu's thresholding.
- You will learn the functions **cv.threshold** and **cv.adaptiveThreshold**.

Simple Thresholding
-------------------

Here, the matter is straightforward. For every pixel, the same threshold value is applied.
If the pixel value is smaller than the threshold, it is set to 0; otherwise it is set to a maximum value.
The function **cv.threshold** is used to apply the thresholding.
The first argument is the source image, which **should be a grayscale image**.
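The rule itself fits in one line of NumPy. This is a sketch of the binary case (cv.threshold with cv.THRESH_BINARY), not the real implementation:

```python
import numpy as np

img = np.array([[0, 100, 127], [128, 200, 255]], dtype=np.uint8)
thresh, maxval = 127, 255

# Pixels strictly above the threshold become maxval, the rest become 0.
binary = np.where(img > thresh, maxval, 0).astype(np.uint8)
```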
@ -65,11 +65,11 @@ Adaptive Thresholding

In the previous section, we used one global value as a threshold.
But this might not be good in all cases, e.g. if an image has different lighting conditions in different areas.
In that case, adaptive thresholding can help.
Here, the algorithm determines the threshold for a pixel based on a small region around it.
So we get different thresholds for different regions of the same image, which gives better results for images with varying illumination.
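The idea can be sketched in NumPy for the mean-based variant (cv.ADAPTIVE_THRESH_MEAN_C): each pixel is compared against the mean of its own neighbourhood minus a constant C, rather than one global value. The helper name and the toy block size and C below are our own choices, not OpenCV defaults:

```python
import numpy as np

def adaptive_mean_threshold(img, block=3, C=2, maxval=255):
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            # Threshold for this pixel: mean of its block x block region, minus C.
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = maxval if img[i, j] > local_mean - C else 0
    return out

# A dark region next to a bright region: the local mean adapts to each.
img = np.array([[10, 10, 10, 200, 200, 200],
                [10, 50, 10, 200, 240, 200]], dtype=np.uint8)
result = adaptive_mean_threshold(img)
```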

In addition to the parameters described above, the method cv.adaptiveThreshold takes three input parameters:

The **adaptiveMethod** decides how the threshold value is calculated:
- cv.ADAPTIVE_THRESH_MEAN_C: The threshold value is the mean of the neighbourhood area minus the constant **C**.
@ -168,8 +168,8 @@ Result:

### How does Otsu's Binarization work?

This section demonstrates a Python implementation of Otsu's binarization to show how it actually
works. If you are not interested, you can skip this.

Since we are working with bimodal images, Otsu's algorithm tries to find a threshold value (t) which
minimizes the **weighted within-class variance** given by the relation:
|
@ -4,7 +4,7 @@ Cross referencing OpenCV from other Doxygen projects {#tutorial_cross_referencin

 Cross referencing OpenCV
 ------------------------

-[Doxygen](https://www.stack.nl/~dimitri/doxygen/) is a tool to generate
+[Doxygen](http://www.doxygen.nl) is a tool to generate
 documentations like the OpenCV documentation you are reading right now.
 It is used by a variety of software projects and if you happen to use it
 to generate your own documentation, and you are using OpenCV inside your

@ -57,5 +57,5 @@ contain a `your_project.tag` file in its root directory.

 References
 ----------

-- [Doxygen: Linking to external documentation](https://www.stack.nl/~dimitri/doxygen/manual/external.html)
+- [Doxygen: Linking to external documentation](http://www.doxygen.nl/manual/external.html)
 - [opencv.tag](opencv.tag)
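On the consuming project's side, the tag-file mechanism referenced here is wired up through Doxygen's `TAGFILES` option. A minimal sketch for a Doxyfile; the local path and the docs URL are placeholders, not part of this change:

```
# Map OpenCV's tag file to the online documentation it describes,
# so \ref links into OpenCV resolve to absolute URLs.
TAGFILES = ./opencv.tag=https://docs.opencv.org/master
```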
@ -684,12 +684,12 @@ References {#tutorial_documentation_refs}

 - [Command reference] - supported commands and their parameters

 <!-- invisible references list -->
-[Doxygen]: http://www.stack.nl/~dimitri/doxygen/index.html)
+[Doxygen]: http://www.doxygen.nl
-[Doxygen download]: http://www.stack.nl/~dimitri/doxygen/download.html
+[Doxygen download]: http://doxygen.nl/download.html
-[Doxygen installation]: http://www.stack.nl/~dimitri/doxygen/manual/install.html
+[Doxygen installation]: http://doxygen.nl/manual/install.html
-[Documenting basics]: http://www.stack.nl/~dimitri/doxygen/manual/docblocks.html
+[Documenting basics]: http://www.doxygen.nl/manual/docblocks.html
-[Markdown support]: http://www.stack.nl/~dimitri/doxygen/manual/markdown.html
+[Markdown support]: http://www.doxygen.nl/manual/markdown.html
-[Formulas support]: http://www.stack.nl/~dimitri/doxygen/manual/formulas.html
+[Formulas support]: http://www.doxygen.nl/manual/formulas.html
 [Supported formula commands]: http://docs.mathjax.org/en/latest/tex.html#supported-latex-commands
-[Command reference]: http://www.stack.nl/~dimitri/doxygen/manual/commands.html
+[Command reference]: http://www.doxygen.nl/manual/commands.html
 [Google Scholar]: http://scholar.google.ru/
@ -9,7 +9,7 @@ Required Packages

 ### Getting the Cutting-edge OpenCV from Git Repository

-Launch GIT client and clone OpenCV repository from [here](http://github.com/opencv/opencv)
+Launch Git client and clone OpenCV repository from [GitHub](http://github.com/opencv/opencv).

 In MacOS it can be done using the following command in Terminal:

@ -18,24 +18,48 @@ cd ~/<my_working _directory>
 git clone https://github.com/opencv/opencv.git
 @endcode

+If you want to install OpenCV’s extra modules, clone the opencv_contrib repository as well:
+
+@code{.bash}
+cd ~/<my_working _directory>
+git clone https://github.com/opencv/opencv_contrib.git
+@endcode
+
 Building OpenCV from Source, using CMake and Command Line
 ---------------------------------------------------------

--# Make symbolic link for Xcode to let OpenCV build scripts find the compiler, header files etc.
+1. Make sure the xcode command line tools are installed:
 @code{.bash}
-cd /
-sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
+xcode-select --install
 @endcode

--# Build OpenCV framework:
+2. Build OpenCV framework:
 @code{.bash}
 cd ~/<my_working_directory>
 python opencv/platforms/ios/build_framework.py ios
 @endcode

-If everything's fine, a few minutes later you will get
-\~/\<my_working_directory\>/ios/opencv2.framework. You can add this framework to your Xcode
-projects.
+3. To install OpenCV’s extra modules, append `--contrib opencv_contrib` to the python command above. **Note:** the extra modules are not included in the iOS Pack download at [OpenCV Releases](https://opencv.org/releases/). If you want to use the extra modules (e.g. aruco), you must build OpenCV yourself and include this option:
+@code{.bash}
+cd ~/<my_working_directory>
+python opencv/platforms/ios/build_framework.py ios --contrib opencv_contrib
+@endcode
+
+4. To exclude a specific module, append `--without <module_name>`. For example, to exclude the "optflow" module from opencv_contrib:
+@code{.bash}
+cd ~/<my_working_directory>
+python opencv/platforms/ios/build_framework.py ios --contrib opencv_contrib --without optflow
+@endcode
+
+5. The build process can take a significant amount of time. Currently (OpenCV 3.4 and 4.1), five separate architectures are built: armv7, armv7s, and arm64 for iOS plus i386 and x86_64 for the iPhone simulator. If you want to specify the architectures to include in the framework, use the `--iphoneos_archs` and/or `--iphonesimulator_archs` options. For example, to only build arm64 for iOS and x86_64 for the simulator:
+@code{.bash}
+cd ~/<my_working_directory>
+python opencv/platforms/ios/build_framework.py ios --contrib opencv_contrib --iphoneos_archs arm64 --iphonesimulator_archs x86_64
+@endcode
+
+If everything’s fine, the build process will create
+`~/<my_working_directory>/ios/opencv2.framework`. You can add this framework to your Xcode projects.

 Further Reading
 ---------------
@ -114,7 +114,7 @@ Additionally you can find very basic sample source code to introduce you to the

 _Compatibility:_ \> OpenCV 2.4.2

-_Author:_ Artem Myagkov, Eduard Feicho
+_Author:_ Artem Myagkov, Eduard Feicho, Steve Nicholson

 We will learn how to setup OpenCV for using it in iOS!
@ -151,7 +151,7 @@ of them, you need to download and install them on your system.

 image file format.
 - The OpenNI Framework contains a set of open source APIs that provide support for natural interaction with devices via methods such as voice command recognition, hand gestures, and body
 motion tracking. Prebuilt binaries can be found [here](http://structure.io/openni). The source code of [OpenNI](https://github.com/OpenNI/OpenNI) and [OpenNI2](https://github.com/OpenNI/OpenNI2) are also available on Github.
-- [Doxygen](http://www.stack.nl/~dimitri/doxygen/) is a documentation generator and is the tool that will actually create the
+- [Doxygen](http://www.doxygen.nl) is a documentation generator and is the tool that will actually create the
 *OpenCV documentation*.

 Now we will describe the steps to follow for a full build (using all the above frameworks, tools and
@ -20,6 +20,44 @@ CV_EXPORTS_W String dumpInputOutputArray(InputOutputArray argument);

 CV_EXPORTS_W String dumpInputOutputArrayOfArrays(InputOutputArrayOfArrays argument);

+CV_WRAP static inline
+String dumpBool(bool argument)
+{
+    return (argument) ? String("Bool: True") : String("Bool: False");
+}
+
+CV_WRAP static inline
+String dumpInt(int argument)
+{
+    return cv::format("Int: %d", argument);
+}
+
+CV_WRAP static inline
+String dumpSizeT(size_t argument)
+{
+    std::ostringstream oss("size_t: ", std::ios::ate);
+    oss << argument;
+    return oss.str();
+}
+
+CV_WRAP static inline
+String dumpFloat(float argument)
+{
+    return cv::format("Float: %.2f", argument);
+}
+
+CV_WRAP static inline
+String dumpDouble(double argument)
+{
+    return cv::format("Double: %.2f", argument);
+}
+
+CV_WRAP static inline
+String dumpCString(const char* argument)
+{
+    return cv::format("String: %s", argument);
+}
+
 CV_WRAP static inline
 AsyncArray testAsyncArray(InputArray argument)
 {
@ -457,10 +457,6 @@ namespace CV__SIMD_NAMESPACE {
 using namespace CV__SIMD_NAMESPACE;
 #endif

-#ifndef CV_DOXYGEN
-CV_CPU_OPTIMIZATION_HAL_NAMESPACE_END
-#endif
-
 #ifndef CV_SIMD_64F
 #define CV_SIMD_64F 0
 #endif
@ -469,11 +465,16 @@ CV_CPU_OPTIMIZATION_HAL_NAMESPACE_END
 #define CV_SIMD_FP16 0  //!< Defined to 1 on native support of operations with float16x8_t / float16x16_t (SIMD256) types
 #endif

 #ifndef CV_SIMD
 #define CV_SIMD 0
 #endif

+#include "simd_utils.impl.hpp"
+
+#ifndef CV_DOXYGEN
+CV_CPU_OPTIMIZATION_HAL_NAMESPACE_END
+#endif
+
 } // cv::

 //! @endcond
modules/core/include/opencv2/core/hal/simd_utils.impl.hpp (new file, 146 lines)
@ -0,0 +1,146 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html

// This header is not standalone. Don't include directly, use "intrin.hpp" instead.
#ifdef OPENCV_HAL_INTRIN_HPP  // defined in intrin.hpp


#if CV_SIMD128 || CV_SIMD128_CPP

template<typename _T> struct Type2Vec128_Traits;
#define CV_INTRIN_DEF_TYPE2VEC128_TRAITS(type_, vec_type_) \
    template<> struct Type2Vec128_Traits<type_> \
    { \
        typedef vec_type_ vec_type; \
    }

CV_INTRIN_DEF_TYPE2VEC128_TRAITS(uchar, v_uint8x16);
CV_INTRIN_DEF_TYPE2VEC128_TRAITS(schar, v_int8x16);
CV_INTRIN_DEF_TYPE2VEC128_TRAITS(ushort, v_uint16x8);
CV_INTRIN_DEF_TYPE2VEC128_TRAITS(short, v_int16x8);
CV_INTRIN_DEF_TYPE2VEC128_TRAITS(unsigned, v_uint32x4);
CV_INTRIN_DEF_TYPE2VEC128_TRAITS(int, v_int32x4);
CV_INTRIN_DEF_TYPE2VEC128_TRAITS(float, v_float32x4);
CV_INTRIN_DEF_TYPE2VEC128_TRAITS(uint64, v_uint64x2);
CV_INTRIN_DEF_TYPE2VEC128_TRAITS(int64, v_int64x2);
#if CV_SIMD128_64F
CV_INTRIN_DEF_TYPE2VEC128_TRAITS(double, v_float64x2);
#endif

template<typename _T> static inline
typename Type2Vec128_Traits<_T>::vec_type v_setall(const _T& a);

template<> inline Type2Vec128_Traits< uchar>::vec_type v_setall< uchar>(const uchar& a)  { return v_setall_u8(a); }
template<> inline Type2Vec128_Traits< schar>::vec_type v_setall< schar>(const schar& a)  { return v_setall_s8(a); }
template<> inline Type2Vec128_Traits<ushort>::vec_type v_setall<ushort>(const ushort& a) { return v_setall_u16(a); }
template<> inline Type2Vec128_Traits< short>::vec_type v_setall< short>(const short& a)  { return v_setall_s16(a); }
template<> inline Type2Vec128_Traits<  uint>::vec_type v_setall<  uint>(const uint& a)   { return v_setall_u32(a); }
template<> inline Type2Vec128_Traits<   int>::vec_type v_setall<   int>(const int& a)    { return v_setall_s32(a); }
template<> inline Type2Vec128_Traits<uint64>::vec_type v_setall<uint64>(const uint64& a) { return v_setall_u64(a); }
template<> inline Type2Vec128_Traits< int64>::vec_type v_setall< int64>(const int64& a)  { return v_setall_s64(a); }
template<> inline Type2Vec128_Traits< float>::vec_type v_setall< float>(const float& a)  { return v_setall_f32(a); }
#if CV_SIMD128_64F
template<> inline Type2Vec128_Traits<double>::vec_type v_setall<double>(const double& a) { return v_setall_f64(a); }
#endif

#endif  // SIMD128


#if CV_SIMD256

template<typename _T> struct Type2Vec256_Traits;
#define CV_INTRIN_DEF_TYPE2VEC256_TRAITS(type_, vec_type_) \
    template<> struct Type2Vec256_Traits<type_> \
    { \
        typedef vec_type_ vec_type; \
    }

CV_INTRIN_DEF_TYPE2VEC256_TRAITS(uchar, v_uint8x32);
CV_INTRIN_DEF_TYPE2VEC256_TRAITS(schar, v_int8x32);
CV_INTRIN_DEF_TYPE2VEC256_TRAITS(ushort, v_uint16x16);
CV_INTRIN_DEF_TYPE2VEC256_TRAITS(short, v_int16x16);
CV_INTRIN_DEF_TYPE2VEC256_TRAITS(unsigned, v_uint32x8);
CV_INTRIN_DEF_TYPE2VEC256_TRAITS(int, v_int32x8);
CV_INTRIN_DEF_TYPE2VEC256_TRAITS(float, v_float32x8);
CV_INTRIN_DEF_TYPE2VEC256_TRAITS(uint64, v_uint64x4);
CV_INTRIN_DEF_TYPE2VEC256_TRAITS(int64, v_int64x4);
#if CV_SIMD256_64F
CV_INTRIN_DEF_TYPE2VEC256_TRAITS(double, v_float64x4);
#endif

template<typename _T> static inline
typename Type2Vec256_Traits<_T>::vec_type v256_setall(const _T& a);

template<> inline Type2Vec256_Traits< uchar>::vec_type v256_setall< uchar>(const uchar& a)  { return v256_setall_u8(a); }
template<> inline Type2Vec256_Traits< schar>::vec_type v256_setall< schar>(const schar& a)  { return v256_setall_s8(a); }
template<> inline Type2Vec256_Traits<ushort>::vec_type v256_setall<ushort>(const ushort& a) { return v256_setall_u16(a); }
template<> inline Type2Vec256_Traits< short>::vec_type v256_setall< short>(const short& a)  { return v256_setall_s16(a); }
template<> inline Type2Vec256_Traits<  uint>::vec_type v256_setall<  uint>(const uint& a)   { return v256_setall_u32(a); }
template<> inline Type2Vec256_Traits<   int>::vec_type v256_setall<   int>(const int& a)    { return v256_setall_s32(a); }
template<> inline Type2Vec256_Traits<uint64>::vec_type v256_setall<uint64>(const uint64& a) { return v256_setall_u64(a); }
template<> inline Type2Vec256_Traits< int64>::vec_type v256_setall< int64>(const int64& a)  { return v256_setall_s64(a); }
template<> inline Type2Vec256_Traits< float>::vec_type v256_setall< float>(const float& a)  { return v256_setall_f32(a); }
#if CV_SIMD256_64F
template<> inline Type2Vec256_Traits<double>::vec_type v256_setall<double>(const double& a) { return v256_setall_f64(a); }
#endif

#endif  // SIMD256


#if CV_SIMD512

template<typename _T> struct Type2Vec512_Traits;
#define CV_INTRIN_DEF_TYPE2VEC512_TRAITS(type_, vec_type_) \
    template<> struct Type2Vec512_Traits<type_> \
    { \
        typedef vec_type_ vec_type; \
    }

CV_INTRIN_DEF_TYPE2VEC512_TRAITS(uchar, v_uint8x64);
CV_INTRIN_DEF_TYPE2VEC512_TRAITS(schar, v_int8x64);
CV_INTRIN_DEF_TYPE2VEC512_TRAITS(ushort, v_uint16x32);
CV_INTRIN_DEF_TYPE2VEC512_TRAITS(short, v_int16x32);
CV_INTRIN_DEF_TYPE2VEC512_TRAITS(unsigned, v_uint32x16);
CV_INTRIN_DEF_TYPE2VEC512_TRAITS(int, v_int32x16);
CV_INTRIN_DEF_TYPE2VEC512_TRAITS(float, v_float32x16);
CV_INTRIN_DEF_TYPE2VEC512_TRAITS(uint64, v_uint64x8);
CV_INTRIN_DEF_TYPE2VEC512_TRAITS(int64, v_int64x8);
#if CV_SIMD512_64F
CV_INTRIN_DEF_TYPE2VEC512_TRAITS(double, v_float64x8);
#endif

template<typename _T> static inline
typename Type2Vec512_Traits<_T>::vec_type v512_setall(const _T& a);

template<> inline Type2Vec512_Traits< uchar>::vec_type v512_setall< uchar>(const uchar& a)  { return v512_setall_u8(a); }
template<> inline Type2Vec512_Traits< schar>::vec_type v512_setall< schar>(const schar& a)  { return v512_setall_s8(a); }
template<> inline Type2Vec512_Traits<ushort>::vec_type v512_setall<ushort>(const ushort& a) { return v512_setall_u16(a); }
template<> inline Type2Vec512_Traits< short>::vec_type v512_setall< short>(const short& a)  { return v512_setall_s16(a); }
template<> inline Type2Vec512_Traits<  uint>::vec_type v512_setall<  uint>(const uint& a)   { return v512_setall_u32(a); }
template<> inline Type2Vec512_Traits<   int>::vec_type v512_setall<   int>(const int& a)    { return v512_setall_s32(a); }
template<> inline Type2Vec512_Traits<uint64>::vec_type v512_setall<uint64>(const uint64& a) { return v512_setall_u64(a); }
template<> inline Type2Vec512_Traits< int64>::vec_type v512_setall< int64>(const int64& a)  { return v512_setall_s64(a); }
template<> inline Type2Vec512_Traits< float>::vec_type v512_setall< float>(const float& a)  { return v512_setall_f32(a); }
#if CV_SIMD512_64F
template<> inline Type2Vec512_Traits<double>::vec_type v512_setall<double>(const double& a) { return v512_setall_f64(a); }
#endif

#endif  // SIMD512


#if CV_SIMD_WIDTH == 16
template<typename _T> static inline
typename Type2Vec128_Traits<_T>::vec_type vx_setall(const _T& a) { return v_setall(a); }
#elif CV_SIMD_WIDTH == 32
template<typename _T> static inline
typename Type2Vec256_Traits<_T>::vec_type vx_setall(const _T& a) { return v256_setall(a); }
#elif CV_SIMD_WIDTH == 64
template<typename _T> static inline
typename Type2Vec512_Traits<_T>::vec_type vx_setall(const _T& a) { return v512_setall(a); }
#else
#error "Build configuration error, unsupported CV_SIMD_WIDTH"
#endif


#endif  // OPENCV_HAL_INTRIN_HPP
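The header above maps an element type to its vector type at compile time (`Type2Vec*_Traits`) and dispatches `v_setall`/`vx_setall` through template specializations. The same idea can be illustrated at runtime in Python with a type-keyed dispatch table. This is purely illustrative; the lane width of 4 and all names below are our assumptions, not OpenCV API:

```python
# Map an element type to a helper that builds a "vector" with all lanes set,
# echoing what Type2Vec*_Traits + v_setall specializations do in C++.

def make_setall(width):
    def setall(value):
        return [value] * width  # one "register" = `width` identical lanes
    return setall

# the dispatch table plays the role of the traits specializations
SETALL = {int: make_setall(4), float: make_setall(4)}

def vx_setall(value):
    # dispatch on the element type, like choosing a template specialization
    return SETALL[type(value)](value)
```

Usage mirrors the C++ side: `vx_setall(5)` yields a 4-lane integer "vector", `vx_setall(2.5)` a 4-lane float one, with the element type alone selecting the result type.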
@ -336,6 +336,40 @@ template<typename R> struct TheTest

         v_float64 vf64 = v_reinterpret_as_f64(r1); out.a.clear(); v_store((double*)out.a.d, vf64); EXPECT_EQ(data.a, out.a);
 #endif

+#if CV_SIMD_WIDTH == 16
+        R setall_res1 = v_setall((LaneType)5);
+        R setall_res2 = v_setall<LaneType>(6);
+#elif CV_SIMD_WIDTH == 32
+        R setall_res1 = v256_setall((LaneType)5);
+        R setall_res2 = v256_setall<LaneType>(6);
+#elif CV_SIMD_WIDTH == 64
+        R setall_res1 = v512_setall((LaneType)5);
+        R setall_res2 = v512_setall<LaneType>(6);
+#else
+#error "Configuration error"
+#endif
+#if CV_SIMD_WIDTH > 0
+        Data<R> setall_res1_; v_store(setall_res1_.d, setall_res1);
+        Data<R> setall_res2_; v_store(setall_res2_.d, setall_res2);
+        for (int i = 0; i < R::nlanes; ++i)
+        {
+            SCOPED_TRACE(cv::format("i=%d", i));
+            EXPECT_EQ((LaneType)5, setall_res1_[i]);
+            EXPECT_EQ((LaneType)6, setall_res2_[i]);
+        }
+#endif
+
+        R vx_setall_res1 = vx_setall((LaneType)11);
+        R vx_setall_res2 = vx_setall<LaneType>(12);
+        Data<R> vx_setall_res1_; v_store(vx_setall_res1_.d, vx_setall_res1);
+        Data<R> vx_setall_res2_; v_store(vx_setall_res2_.d, vx_setall_res2);
+        for (int i = 0; i < R::nlanes; ++i)
+        {
+            SCOPED_TRACE(cv::format("i=%d", i));
+            EXPECT_EQ((LaneType)11, vx_setall_res1_[i]);
+            EXPECT_EQ((LaneType)12, vx_setall_res2_[i]);
+        }
+
         return *this;
     }
@ -1585,7 +1585,7 @@ void TFImporter::populateNet(Net dstNet)
             }
         }
     }
-    else if (type == "FusedBatchNorm")
+    else if (type == "FusedBatchNorm" || type == "FusedBatchNormV3")
     {
         // op: "FusedBatchNorm"
         // input: "input"
@ -1451,7 +1451,7 @@ public:
     sortSamplesByClasses( _samples, _responses, sidx_all, class_ranges );

     //check that while cross-validation there were the samples from all the classes
-    if( class_ranges[class_count] <= 0 )
+    if ((int)class_ranges.size() < class_count + 1)
         CV_Error( CV_StsBadArg, "While cross-validation one or more of the classes have "
         "been fell out of the sample. Try to reduce <Params::k_fold>" );
modules/ml/test/test_ann.cpp (new file, 200 lines)
@ -0,0 +1,200 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

#include "test_precomp.hpp"

// #define GENERATE_TESTDATA

namespace opencv_test { namespace {

struct Activation
{
    int id;
    const char * name;
};
void PrintTo(const Activation &a, std::ostream *os) { *os << a.name; }

Activation activation_list[] =
{
    { ml::ANN_MLP::IDENTITY, "identity" },
    { ml::ANN_MLP::SIGMOID_SYM, "sigmoid_sym" },
    { ml::ANN_MLP::GAUSSIAN, "gaussian" },
    { ml::ANN_MLP::RELU, "relu" },
    { ml::ANN_MLP::LEAKYRELU, "leakyrelu" },
};

typedef testing::TestWithParam< Activation > ML_ANN_Params;

TEST_P(ML_ANN_Params, ActivationFunction)
{
    const Activation &activation = GetParam();
    const string dataname = "waveform";
    const string data_path = findDataFile(dataname + ".data");
    const string model_name = dataname + "_" + activation.name + ".yml";

    Ptr<TrainData> tdata = TrainData::loadFromCSV(data_path, 0);
    ASSERT_FALSE(tdata.empty());

    // hack?
    const uint64 old_state = theRNG().state;
    theRNG().state = 1027401484159173092;
    tdata->setTrainTestSplit(500);
    theRNG().state = old_state;

    Mat_<int> layerSizes(1, 4);
    layerSizes(0, 0) = tdata->getNVars();
    layerSizes(0, 1) = 100;
    layerSizes(0, 2) = 100;
    layerSizes(0, 3) = tdata->getResponses().cols;

    Mat testSamples = tdata->getTestSamples();
    Mat rx, ry;

    {
        Ptr<ml::ANN_MLP> x = ml::ANN_MLP::create();
        x->setActivationFunction(activation.id);
        x->setLayerSizes(layerSizes);
        x->setTrainMethod(ml::ANN_MLP::RPROP, 0.01, 0.1);
        x->setTermCriteria(TermCriteria(TermCriteria::COUNT, 300, 0.01));
        x->train(tdata, ml::ANN_MLP::NO_OUTPUT_SCALE);
        ASSERT_TRUE(x->isTrained());
        x->predict(testSamples, rx);
#ifdef GENERATE_TESTDATA
        x->save(cvtest::TS::ptr()->get_data_path() + model_name);
#endif
    }

    {
        const string model_path = findDataFile(model_name);
        Ptr<ml::ANN_MLP> y = Algorithm::load<ANN_MLP>(model_path);
        ASSERT_TRUE(y);
        y->predict(testSamples, ry);
        EXPECT_MAT_NEAR(rx, ry, FLT_EPSILON);
    }
}

INSTANTIATE_TEST_CASE_P(/**/, ML_ANN_Params, testing::ValuesIn(activation_list));

//==================================================================================================

CV_ENUM(ANN_MLP_METHOD, ANN_MLP::RPROP, ANN_MLP::ANNEAL)

typedef tuple<ANN_MLP_METHOD, string, int> ML_ANN_METHOD_Params;
typedef TestWithParam<ML_ANN_METHOD_Params> ML_ANN_METHOD;

TEST_P(ML_ANN_METHOD, Test)
{
    int methodType = get<0>(GetParam());
    string methodName = get<1>(GetParam());
    int N = get<2>(GetParam());

    String folder = string(cvtest::TS::ptr()->get_data_path());
    String original_path = findDataFile("waveform.data");
    string dataname = "waveform_" + methodName;
    string weight_name = dataname + "_init_weight.yml.gz";
    string model_name = dataname + ".yml.gz";
    string response_name = dataname + "_response.yml.gz";

    Ptr<TrainData> tdata2 = TrainData::loadFromCSV(original_path, 0);
    ASSERT_FALSE(tdata2.empty());

    Mat samples = tdata2->getSamples()(Range(0, N), Range::all());
    Mat responses(N, 3, CV_32FC1, Scalar(0));
    for (int i = 0; i < N; i++)
        responses.at<float>(i, static_cast<int>(tdata2->getResponses().at<float>(i, 0))) = 1;

    Ptr<TrainData> tdata = TrainData::create(samples, ml::ROW_SAMPLE, responses);
    ASSERT_FALSE(tdata.empty());

    // hack?
    const uint64 old_state = theRNG().state;
    theRNG().state = 0;
    tdata->setTrainTestSplitRatio(0.8);
    theRNG().state = old_state;

    Mat testSamples = tdata->getTestSamples();

    // train 1st stage

    Ptr<ml::ANN_MLP> xx = ml::ANN_MLP::create();
    Mat_<int> layerSizes(1, 4);
    layerSizes(0, 0) = tdata->getNVars();
    layerSizes(0, 1) = 30;
    layerSizes(0, 2) = 30;
    layerSizes(0, 3) = tdata->getResponses().cols;
    xx->setLayerSizes(layerSizes);
    xx->setActivationFunction(ml::ANN_MLP::SIGMOID_SYM);
    xx->setTrainMethod(ml::ANN_MLP::RPROP);
    xx->setTermCriteria(TermCriteria(TermCriteria::COUNT, 1, 0.01));
    xx->train(tdata, ml::ANN_MLP::NO_OUTPUT_SCALE + ml::ANN_MLP::NO_INPUT_SCALE);
#ifdef GENERATE_TESTDATA
    {
        FileStorage fs;
        fs.open(cvtest::TS::ptr()->get_data_path() + weight_name, FileStorage::WRITE + FileStorage::BASE64);
        xx->write(fs);
    }
#endif

    // train 2nd stage
    Mat r_gold;
    Ptr<ml::ANN_MLP> x = ml::ANN_MLP::create();
    {
        const string weight_file = findDataFile(weight_name);
        FileStorage fs;
        fs.open(weight_file, FileStorage::READ);
        x->read(fs.root());
    }
    x->setTrainMethod(methodType);
    if (methodType == ml::ANN_MLP::ANNEAL)
    {
        x->setAnnealEnergyRNG(RNG(CV_BIG_INT(0xffffffff)));
        x->setAnnealInitialT(12);
        x->setAnnealFinalT(0.15);
        x->setAnnealCoolingRatio(0.96);
        x->setAnnealItePerStep(11);
    }
    x->setTermCriteria(TermCriteria(TermCriteria::COUNT, 100, 0.01));
    x->train(tdata, ml::ANN_MLP::NO_OUTPUT_SCALE + ml::ANN_MLP::NO_INPUT_SCALE + ml::ANN_MLP::UPDATE_WEIGHTS);
    ASSERT_TRUE(x->isTrained());
#ifdef GENERATE_TESTDATA
    x->save(cvtest::TS::ptr()->get_data_path() + model_name);
    x->predict(testSamples, r_gold);
    {
        FileStorage fs_response(cvtest::TS::ptr()->get_data_path() + response_name, FileStorage::WRITE + FileStorage::BASE64);
        fs_response << "response" << r_gold;
    }
#endif
    {
        const string response_file = findDataFile(response_name);
        FileStorage fs_response(response_file, FileStorage::READ);
        fs_response["response"] >> r_gold;
    }
    ASSERT_FALSE(r_gold.empty());

    // verify
    const string model_file = findDataFile(model_name);
    Ptr<ml::ANN_MLP> y = Algorithm::load<ANN_MLP>(model_file);
    ASSERT_TRUE(y);
    Mat rx, ry;
    for (int j = 0; j < 4; j++)
    {
        rx = x->getWeights(j);
        ry = y->getWeights(j);
        EXPECT_MAT_NEAR(rx, ry, FLT_EPSILON) << "Weights are not equal for layer: " << j;
    }
    x->predict(testSamples, rx);
    y->predict(testSamples, ry);
    EXPECT_MAT_NEAR(ry, rx, FLT_EPSILON) << "Predict are not equal to result of the saved model";
    EXPECT_MAT_NEAR(r_gold, rx, FLT_EPSILON) << "Predict are not equal to 'gold' response";
}

INSTANTIATE_TEST_CASE_P(/*none*/, ML_ANN_METHOD,
    testing::Values(
        ML_ANN_METHOD_Params(ml::ANN_MLP::RPROP, "rprop", 5000),
        ML_ANN_METHOD_Params(ml::ANN_MLP::ANNEAL, "anneal", 1000)
        // ML_ANN_METHOD_Params(ml::ANN_MLP::BACKPROP, "backprop", 500) -----> NO BACKPROP TEST
    )
);

}} // namespace
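Both tests above pin `theRNG().state` around `setTrainTestSplit`/`setTrainTestSplitRatio` so the random train/test partition is identical on every run. The underlying idea, a deterministic shuffled split driven by a fixed seed, can be sketched in plain Python; the function and names are ours, not OpenCV's:

```python
import random

def train_test_split(n_samples, n_train, seed):
    """Deterministic shuffled split: the same seed always yields the same partition."""
    rng = random.Random(seed)      # local RNG, does not disturb global random state
    idx = list(range(n_samples))
    rng.shuffle(idx)
    return idx[:n_train], idx[n_train:]

a = train_test_split(10, 7, seed=42)
b = train_test_split(10, 7, seed=42)
# a == b: identical splits on every run with the same seed
```

Using a local `random.Random` instance mirrors the save/restore of `theRNG().state` in the tests: the split is reproducible without perturbing any other code that draws random numbers.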
56
modules/ml/test/test_bayes.cpp
Normal file
56
modules/ml/test/test_bayes.cpp
Normal file
@ -0,0 +1,56 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

#include "test_precomp.hpp"

namespace opencv_test { namespace {

TEST(ML_NBAYES, regression_5911)
{
    int N=12;
    Ptr<ml::NormalBayesClassifier> nb = cv::ml::NormalBayesClassifier::create();

    // data:
    float X_data[] = {
        1,2,3,4, 1,2,3,4, 1,2,3,4, 1,2,3,4,
        5,5,5,5, 5,5,5,5, 5,5,5,5, 5,5,5,5,
        4,3,2,1, 4,3,2,1, 4,3,2,1, 4,3,2,1
    };
    Mat_<float> X(N, 4, X_data);

    // labels:
    int Y_data[] = { 0,0,0,0, 1,1,1,1, 2,2,2,2 };
    Mat_<int> Y(N, 1, Y_data);

    nb->train(X, ml::ROW_SAMPLE, Y);

    // single prediction:
    Mat R1,P1;
    for (int i=0; i<N; i++)
    {
        Mat r,p;
        nb->predictProb(X.row(i), r, p);
        R1.push_back(r);
        P1.push_back(p);
    }

    // bulk prediction (continuous memory):
    Mat R2,P2;
    nb->predictProb(X, R2, P2);

    EXPECT_EQ(255 * R2.total(), sum(R1 == R2)[0]);
    EXPECT_EQ(255 * P2.total(), sum(P1 == P2)[0]);

    // bulk prediction, with non-continuous memory storage
    Mat R3_(N, 1+1, CV_32S),
        P3_(N, 3+1, CV_32F);
    nb->predictProb(X, R3_.col(0), P3_.colRange(0,3));
    Mat R3 = R3_.col(0).clone(),
        P3 = P3_.colRange(0,3).clone();

    EXPECT_EQ(255 * R3.total(), sum(R1 == R3)[0]);
    EXPECT_EQ(255 * P3.total(), sum(P1 == P3)[0]);
}

}} // namespace
186  modules/ml/test/test_em.cpp  Normal file
@@ -0,0 +1,186 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

#include "test_precomp.hpp"

namespace opencv_test { namespace {

CV_ENUM(EM_START_STEP, EM::START_AUTO_STEP, EM::START_M_STEP, EM::START_E_STEP)
CV_ENUM(EM_COV_MAT, EM::COV_MAT_GENERIC, EM::COV_MAT_DIAGONAL, EM::COV_MAT_SPHERICAL)

typedef testing::TestWithParam< tuple<EM_START_STEP, EM_COV_MAT> > ML_EM_Params;

TEST_P(ML_EM_Params, accuracy)
{
    const int nclusters = 3;
    const int sizesArr[] = { 500, 700, 800 };
    const vector<int> sizes( sizesArr, sizesArr + sizeof(sizesArr) / sizeof(sizesArr[0]) );
    const int pointsCount = sizesArr[0] + sizesArr[1] + sizesArr[2];
    Mat means;
    vector<Mat> covs;
    defaultDistribs( means, covs, CV_64FC1 );
    Mat trainData(pointsCount, 2, CV_64FC1 );
    Mat trainLabels;
    generateData( trainData, trainLabels, sizes, means, covs, CV_64FC1, CV_32SC1 );
    Mat testData( pointsCount, 2, CV_64FC1 );
    Mat testLabels;
    generateData( testData, testLabels, sizes, means, covs, CV_64FC1, CV_32SC1 );
    Mat probs(trainData.rows, nclusters, CV_64FC1, cv::Scalar(1));
    Mat weights(1, nclusters, CV_64FC1, cv::Scalar(1));
    TermCriteria termCrit(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, FLT_EPSILON);
    int startStep = get<0>(GetParam());
    int covMatType = get<1>(GetParam());
    cv::Mat labels;

    Ptr<EM> em = EM::create();
    em->setClustersNumber(nclusters);
    em->setCovarianceMatrixType(covMatType);
    em->setTermCriteria(termCrit);
    if( startStep == EM::START_AUTO_STEP )
        em->trainEM( trainData, noArray(), labels, noArray() );
    else if( startStep == EM::START_E_STEP )
        em->trainE( trainData, means, covs, weights, noArray(), labels, noArray() );
    else if( startStep == EM::START_M_STEP )
        em->trainM( trainData, probs, noArray(), labels, noArray() );

    {
        SCOPED_TRACE("Train");
        float err = 1000;
        EXPECT_TRUE(calcErr( labels, trainLabels, sizes, err , false, false ));
        EXPECT_LE(err, 0.008f);
    }

    {
        SCOPED_TRACE("Test");
        float err = 1000;
        labels.create( testData.rows, 1, CV_32SC1 );
        for( int i = 0; i < testData.rows; i++ )
        {
            Mat sample = testData.row(i);
            Mat out_probs;
            labels.at<int>(i) = static_cast<int>(em->predict2( sample, out_probs )[1]);
        }
        EXPECT_TRUE(calcErr( labels, testLabels, sizes, err, false, false ));
        EXPECT_LE(err, 0.008f);
    }
}

INSTANTIATE_TEST_CASE_P(/**/, ML_EM_Params,
    testing::Combine(
        testing::Values(EM::START_AUTO_STEP, EM::START_M_STEP, EM::START_E_STEP),
        testing::Values(EM::COV_MAT_GENERIC, EM::COV_MAT_DIAGONAL, EM::COV_MAT_SPHERICAL)
    ));

//==================================================================================================

TEST(ML_EM, save_load)
{
    const int nclusters = 2;
    Mat_<double> samples(3, 1);
    samples << 1., 2., 3.;

    std::vector<double> firstResult;
    string filename = cv::tempfile(".xml");
    {
        Mat labels;
        Ptr<EM> em = EM::create();
        em->setClustersNumber(nclusters);
        em->trainEM(samples, noArray(), labels, noArray());
        for( int i = 0; i < samples.rows; i++)
        {
            Vec2d res = em->predict2(samples.row(i), noArray());
            firstResult.push_back(res[1]);
        }
        {
            FileStorage fs = FileStorage(filename, FileStorage::WRITE);
            ASSERT_NO_THROW(fs << "em" << "{");
            ASSERT_NO_THROW(em->write(fs));
            ASSERT_NO_THROW(fs << "}");
        }
    }
    {
        Ptr<EM> em;
        ASSERT_NO_THROW(em = Algorithm::load<EM>(filename));
        for( int i = 0; i < samples.rows; i++)
        {
            SCOPED_TRACE(i);
            Vec2d res = em->predict2(samples.row(i), noArray());
            EXPECT_DOUBLE_EQ(firstResult[i], res[1]);
        }
    }
    remove(filename.c_str());
}

//==================================================================================================

TEST(ML_EM, classification)
{
    // This test classifies spam by the following way:
    // 1. estimates distributions of "spam" / "not spam"
    // 2. predict classID using Bayes classifier for estimated distributions.
    string dataFilename = findDataFile("spambase.data");
    Ptr<TrainData> data = TrainData::loadFromCSV(dataFilename, 0);
    ASSERT_FALSE(data.empty());

    Mat samples = data->getSamples();
    ASSERT_EQ(samples.cols, 57);
    Mat responses = data->getResponses();

    vector<int> trainSamplesMask(samples.rows, 0);
    const int trainSamplesCount = (int)(0.5f * samples.rows);
    const int testSamplesCount = samples.rows - trainSamplesCount;
    for(int i = 0; i < trainSamplesCount; i++)
        trainSamplesMask[i] = 1;
    RNG &rng = cv::theRNG();
    for(size_t i = 0; i < trainSamplesMask.size(); i++)
    {
        int i1 = rng(static_cast<unsigned>(trainSamplesMask.size()));
        int i2 = rng(static_cast<unsigned>(trainSamplesMask.size()));
        std::swap(trainSamplesMask[i1], trainSamplesMask[i2]);
    }

    Mat samples0, samples1;
    for(int i = 0; i < samples.rows; i++)
    {
        if(trainSamplesMask[i])
        {
            Mat sample = samples.row(i);
            int resp = (int)responses.at<float>(i);
            if(resp == 0)
                samples0.push_back(sample);
            else
                samples1.push_back(sample);
        }
    }

    Ptr<EM> model0 = EM::create();
    model0->setClustersNumber(3);
    model0->trainEM(samples0, noArray(), noArray(), noArray());

    Ptr<EM> model1 = EM::create();
    model1->setClustersNumber(3);
    model1->trainEM(samples1, noArray(), noArray(), noArray());

    // confusion matrices
    Mat_<int> trainCM(2, 2, 0);
    Mat_<int> testCM(2, 2, 0);
    const double lambda = 1.;
    for(int i = 0; i < samples.rows; i++)
    {
        Mat sample = samples.row(i);
        double sampleLogLikelihoods0 = model0->predict2(sample, noArray())[0];
        double sampleLogLikelihoods1 = model1->predict2(sample, noArray())[0];
        int classID = (sampleLogLikelihoods0 >= lambda * sampleLogLikelihoods1) ? 0 : 1;
        int resp = (int)responses.at<float>(i);
        EXPECT_TRUE(resp == 0 || resp == 1);
        if(trainSamplesMask[i])
            trainCM(resp, classID)++;
        else
            testCM(resp, classID)++;
    }
    EXPECT_LE((double)(trainCM(1,0) + trainCM(0,1)) / trainSamplesCount, 0.23);
    EXPECT_LE((double)(testCM(1,0) + testCM(0,1)) / testSamplesCount, 0.26);
}

}} // namespace
@@ -1,727 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
//  IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
//  By downloading, copying, installing or using the software you agree to this license.
//  If you do not agree to this license, do not download, install,
//  copy or use the software.
//
//
//                        Intel License Agreement
//                For Open Source Computer Vision Library
//
// Copyright (C) 2000, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of Intel Corporation may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/

#include "test_precomp.hpp"

namespace opencv_test { namespace {

using cv::ml::TrainData;
using cv::ml::EM;
using cv::ml::KNearest;

void defaultDistribs( Mat& means, vector<Mat>& covs, int type=CV_32FC1 )
{
    CV_TRACE_FUNCTION();
    float mp0[] = {0.0f, 0.0f}, cp0[] = {0.67f, 0.0f, 0.0f, 0.67f};
    float mp1[] = {5.0f, 0.0f}, cp1[] = {1.0f, 0.0f, 0.0f, 1.0f};
    float mp2[] = {1.0f, 5.0f}, cp2[] = {1.0f, 0.0f, 0.0f, 1.0f};
    means.create(3, 2, type);
    Mat m0( 1, 2, CV_32FC1, mp0 ), c0( 2, 2, CV_32FC1, cp0 );
    Mat m1( 1, 2, CV_32FC1, mp1 ), c1( 2, 2, CV_32FC1, cp1 );
    Mat m2( 1, 2, CV_32FC1, mp2 ), c2( 2, 2, CV_32FC1, cp2 );
    means.resize(3), covs.resize(3);

    Mat mr0 = means.row(0);
    m0.convertTo(mr0, type);
    c0.convertTo(covs[0], type);

    Mat mr1 = means.row(1);
    m1.convertTo(mr1, type);
    c1.convertTo(covs[1], type);

    Mat mr2 = means.row(2);
    m2.convertTo(mr2, type);
    c2.convertTo(covs[2], type);
}

// generate points sets by normal distributions
void generateData( Mat& data, Mat& labels, const vector<int>& sizes, const Mat& _means, const vector<Mat>& covs, int dataType, int labelType )
{
    CV_TRACE_FUNCTION();
    vector<int>::const_iterator sit = sizes.begin();
    int total = 0;
    for( ; sit != sizes.end(); ++sit )
        total += *sit;
    CV_Assert( _means.rows == (int)sizes.size() && covs.size() == sizes.size() );
    CV_Assert( !data.empty() && data.rows == total );
    CV_Assert( data.type() == dataType );

    labels.create( data.rows, 1, labelType );

    randn( data, Scalar::all(-1.0), Scalar::all(1.0) );
    vector<Mat> means(sizes.size());
    for(int i = 0; i < _means.rows; i++)
        means[i] = _means.row(i);
    vector<Mat>::const_iterator mit = means.begin(), cit = covs.begin();
    int bi, ei = 0;
    sit = sizes.begin();
    for( int p = 0, l = 0; sit != sizes.end(); ++sit, ++mit, ++cit, l++ )
    {
        bi = ei;
        ei = bi + *sit;
        assert( mit->rows == 1 && mit->cols == data.cols );
        assert( cit->rows == data.cols && cit->cols == data.cols );
        for( int i = bi; i < ei; i++, p++ )
        {
            Mat r = data.row(i);
            r = r * (*cit) + *mit;
            if( labelType == CV_32FC1 )
                labels.at<float>(p, 0) = (float)l;
            else if( labelType == CV_32SC1 )
                labels.at<int>(p, 0) = l;
            else
            {
                CV_DbgAssert(0);
            }
        }
    }
}

int maxIdx( const vector<int>& count )
{
    int idx = -1;
    int maxVal = -1;
    vector<int>::const_iterator it = count.begin();
    for( int i = 0; it != count.end(); ++it, i++ )
    {
        if( *it > maxVal)
        {
            maxVal = *it;
            idx = i;
        }
    }
    assert( idx >= 0);
    return idx;
}

bool getLabelsMap( const Mat& labels, const vector<int>& sizes, vector<int>& labelsMap, bool checkClusterUniq=true )
{
    size_t total = 0, nclusters = sizes.size();
    for(size_t i = 0; i < sizes.size(); i++)
        total += sizes[i];

    assert( !labels.empty() );
    assert( labels.total() == total && (labels.cols == 1 || labels.rows == 1));
    assert( labels.type() == CV_32SC1 || labels.type() == CV_32FC1 );

    bool isFlt = labels.type() == CV_32FC1;

    labelsMap.resize(nclusters);

    vector<bool> buzy(nclusters, false);
    int startIndex = 0;
    for( size_t clusterIndex = 0; clusterIndex < sizes.size(); clusterIndex++ )
    {
        vector<int> count( nclusters, 0 );
        for( int i = startIndex; i < startIndex + sizes[clusterIndex]; i++)
        {
            int lbl = isFlt ? (int)labels.at<float>(i) : labels.at<int>(i);
            CV_Assert(lbl < (int)nclusters);
            count[lbl]++;
            CV_Assert(count[lbl] < (int)total);
        }
        startIndex += sizes[clusterIndex];

        int cls = maxIdx( count );
        CV_Assert( !checkClusterUniq || !buzy[cls] );

        labelsMap[clusterIndex] = cls;

        buzy[cls] = true;
    }

    if(checkClusterUniq)
    {
        for(size_t i = 0; i < buzy.size(); i++)
            if(!buzy[i])
                return false;
    }

    return true;
}

bool calcErr( const Mat& labels, const Mat& origLabels, const vector<int>& sizes, float& err, bool labelsEquivalent = true, bool checkClusterUniq=true )
{
    err = 0;
    CV_Assert( !labels.empty() && !origLabels.empty() );
    CV_Assert( labels.rows == 1 || labels.cols == 1 );
    CV_Assert( origLabels.rows == 1 || origLabels.cols == 1 );
    CV_Assert( labels.total() == origLabels.total() );
    CV_Assert( labels.type() == CV_32SC1 || labels.type() == CV_32FC1 );
    CV_Assert( origLabels.type() == labels.type() );

    vector<int> labelsMap;
    bool isFlt = labels.type() == CV_32FC1;
    if( !labelsEquivalent )
    {
        if( !getLabelsMap( labels, sizes, labelsMap, checkClusterUniq ) )
            return false;

        for( int i = 0; i < labels.rows; i++ )
            if( isFlt )
                err += labels.at<float>(i) != labelsMap[(int)origLabels.at<float>(i)] ? 1.f : 0.f;
            else
                err += labels.at<int>(i) != labelsMap[origLabels.at<int>(i)] ? 1.f : 0.f;
    }
    else
    {
        for( int i = 0; i < labels.rows; i++ )
            if( isFlt )
                err += labels.at<float>(i) != origLabels.at<float>(i) ? 1.f : 0.f;
            else
                err += labels.at<int>(i) != origLabels.at<int>(i) ? 1.f : 0.f;
    }
    err /= (float)labels.rows;
    return true;
}

//--------------------------------------------------------------------------------------------
class CV_KMeansTest : public cvtest::BaseTest {
public:
    CV_KMeansTest() {}
protected:
    virtual void run( int start_from );
};

void CV_KMeansTest::run( int /*start_from*/ )
{
    CV_TRACE_FUNCTION();
    const int iters = 100;
    int sizesArr[] = { 5000, 7000, 8000 };
    int pointsCount = sizesArr[0]+ sizesArr[1] + sizesArr[2];

    Mat data( pointsCount, 2, CV_32FC1 ), labels;
    vector<int> sizes( sizesArr, sizesArr + sizeof(sizesArr) / sizeof(sizesArr[0]) );
    Mat means;
    vector<Mat> covs;
    defaultDistribs( means, covs );
    generateData( data, labels, sizes, means, covs, CV_32FC1, CV_32SC1 );

    int code = cvtest::TS::OK;
    float err;
    Mat bestLabels;
    // 1. flag==KMEANS_PP_CENTERS
    kmeans( data, 3, bestLabels, TermCriteria( TermCriteria::COUNT, iters, 0.0), 0, KMEANS_PP_CENTERS, noArray() );
    if( !calcErr( bestLabels, labels, sizes, err , false ) )
    {
        ts->printf( cvtest::TS::LOG, "Bad output labels if flag==KMEANS_PP_CENTERS.\n" );
        code = cvtest::TS::FAIL_INVALID_OUTPUT;
    }
    else if( err > 0.01f )
    {
        ts->printf( cvtest::TS::LOG, "Bad accuracy (%f) if flag==KMEANS_PP_CENTERS.\n", err );
        code = cvtest::TS::FAIL_BAD_ACCURACY;
    }

    // 2. flag==KMEANS_RANDOM_CENTERS
    kmeans( data, 3, bestLabels, TermCriteria( TermCriteria::COUNT, iters, 0.0), 0, KMEANS_RANDOM_CENTERS, noArray() );
    if( !calcErr( bestLabels, labels, sizes, err, false ) )
    {
        ts->printf( cvtest::TS::LOG, "Bad output labels if flag==KMEANS_RANDOM_CENTERS.\n" );
        code = cvtest::TS::FAIL_INVALID_OUTPUT;
    }
    else if( err > 0.01f )
    {
        ts->printf( cvtest::TS::LOG, "Bad accuracy (%f) if flag==KMEANS_RANDOM_CENTERS.\n", err );
        code = cvtest::TS::FAIL_BAD_ACCURACY;
    }

    // 3. flag==KMEANS_USE_INITIAL_LABELS
    labels.copyTo( bestLabels );
    RNG rng;
    for( int i = 0; i < 0.5f * pointsCount; i++ )
        bestLabels.at<int>( rng.next() % pointsCount, 0 ) = rng.next() % 3;
    kmeans( data, 3, bestLabels, TermCriteria( TermCriteria::COUNT, iters, 0.0), 0, KMEANS_USE_INITIAL_LABELS, noArray() );
    if( !calcErr( bestLabels, labels, sizes, err, false ) )
    {
        ts->printf( cvtest::TS::LOG, "Bad output labels if flag==KMEANS_USE_INITIAL_LABELS.\n" );
        code = cvtest::TS::FAIL_INVALID_OUTPUT;
    }
    else if( err > 0.01f )
    {
        ts->printf( cvtest::TS::LOG, "Bad accuracy (%f) if flag==KMEANS_USE_INITIAL_LABELS.\n", err );
        code = cvtest::TS::FAIL_BAD_ACCURACY;
    }

    ts->set_failed_test_info( code );
}

//--------------------------------------------------------------------------------------------
class CV_KNearestTest : public cvtest::BaseTest {
public:
    CV_KNearestTest() {}
protected:
    virtual void run( int start_from );
};

void CV_KNearestTest::run( int /*start_from*/ )
{
    int sizesArr[] = { 500, 700, 800 };
    int pointsCount = sizesArr[0]+ sizesArr[1] + sizesArr[2];

    // train data
    Mat trainData( pointsCount, 2, CV_32FC1 ), trainLabels;
    vector<int> sizes( sizesArr, sizesArr + sizeof(sizesArr) / sizeof(sizesArr[0]) );
    Mat means;
    vector<Mat> covs;
    defaultDistribs( means, covs );
    generateData( trainData, trainLabels, sizes, means, covs, CV_32FC1, CV_32FC1 );

    // test data
    Mat testData( pointsCount, 2, CV_32FC1 ), testLabels, bestLabels;
    generateData( testData, testLabels, sizes, means, covs, CV_32FC1, CV_32FC1 );

    int code = cvtest::TS::OK;

    // KNearest default implementation
    Ptr<KNearest> knearest = KNearest::create();
    knearest->train(trainData, ml::ROW_SAMPLE, trainLabels);
    knearest->findNearest(testData, 4, bestLabels);
    float err;
    if( !calcErr( bestLabels, testLabels, sizes, err, true ) )
    {
        ts->printf( cvtest::TS::LOG, "Bad output labels.\n" );
        code = cvtest::TS::FAIL_INVALID_OUTPUT;
    }
    else if( err > 0.01f )
    {
        ts->printf( cvtest::TS::LOG, "Bad accuracy (%f) on test data.\n", err );
        code = cvtest::TS::FAIL_BAD_ACCURACY;
    }

    // KNearest KDTree implementation
    Ptr<KNearest> knearestKdt = KNearest::create();
    knearestKdt->setAlgorithmType(KNearest::KDTREE);
    knearestKdt->train(trainData, ml::ROW_SAMPLE, trainLabels);
    knearestKdt->findNearest(testData, 4, bestLabels);
    if( !calcErr( bestLabels, testLabels, sizes, err, true ) )
    {
        ts->printf( cvtest::TS::LOG, "Bad output labels.\n" );
        code = cvtest::TS::FAIL_INVALID_OUTPUT;
    }
    else if( err > 0.01f )
    {
        ts->printf( cvtest::TS::LOG, "Bad accuracy (%f) on test data.\n", err );
        code = cvtest::TS::FAIL_BAD_ACCURACY;
    }

    ts->set_failed_test_info( code );
}

class EM_Params
{
public:
    EM_Params(int _nclusters=10, int _covMatType=EM::COV_MAT_DIAGONAL, int _startStep=EM::START_AUTO_STEP,
              const cv::TermCriteria& _termCrit=cv::TermCriteria(cv::TermCriteria::COUNT+cv::TermCriteria::EPS, 100, FLT_EPSILON),
              const cv::Mat* _probs=0, const cv::Mat* _weights=0,
              const cv::Mat* _means=0, const std::vector<cv::Mat>* _covs=0)
        : nclusters(_nclusters), covMatType(_covMatType), startStep(_startStep),
          probs(_probs), weights(_weights), means(_means), covs(_covs), termCrit(_termCrit)
    {}

    int nclusters;
    int covMatType;
    int startStep;

    // all 4 following matrices should have type CV_32FC1
    const cv::Mat* probs;
    const cv::Mat* weights;
    const cv::Mat* means;
    const std::vector<cv::Mat>* covs;

    cv::TermCriteria termCrit;
};

//--------------------------------------------------------------------------------------------
class CV_EMTest : public cvtest::BaseTest
{
public:
    CV_EMTest() {}
protected:
    virtual void run( int start_from );
    int runCase( int caseIndex, const EM_Params& params,
                 const cv::Mat& trainData, const cv::Mat& trainLabels,
                 const cv::Mat& testData, const cv::Mat& testLabels,
                 const vector<int>& sizes);
};

int CV_EMTest::runCase( int caseIndex, const EM_Params& params,
                        const cv::Mat& trainData, const cv::Mat& trainLabels,
                        const cv::Mat& testData, const cv::Mat& testLabels,
                        const vector<int>& sizes )
{
    int code = cvtest::TS::OK;

    cv::Mat labels;
    float err;

    Ptr<EM> em = EM::create();
    em->setClustersNumber(params.nclusters);
    em->setCovarianceMatrixType(params.covMatType);
    em->setTermCriteria(params.termCrit);
    if( params.startStep == EM::START_AUTO_STEP )
        em->trainEM( trainData, noArray(), labels, noArray() );
    else if( params.startStep == EM::START_E_STEP )
        em->trainE( trainData, *params.means, *params.covs,
                    *params.weights, noArray(), labels, noArray() );
    else if( params.startStep == EM::START_M_STEP )
        em->trainM( trainData, *params.probs,
                    noArray(), labels, noArray() );

    // check train error
    if( !calcErr( labels, trainLabels, sizes, err , false, false ) )
    {
        ts->printf( cvtest::TS::LOG, "Case index %i : Bad output labels.\n", caseIndex );
        code = cvtest::TS::FAIL_INVALID_OUTPUT;
    }
    else if( err > 0.008f )
    {
        ts->printf( cvtest::TS::LOG, "Case index %i : Bad accuracy (%f) on train data.\n", caseIndex, err );
        code = cvtest::TS::FAIL_BAD_ACCURACY;
    }

    // check test error
    labels.create( testData.rows, 1, CV_32SC1 );
    for( int i = 0; i < testData.rows; i++ )
    {
        Mat sample = testData.row(i);
        Mat probs;
        labels.at<int>(i) = static_cast<int>(em->predict2( sample, probs )[1]);
    }
    if( !calcErr( labels, testLabels, sizes, err, false, false ) )
    {
        ts->printf( cvtest::TS::LOG, "Case index %i : Bad output labels.\n", caseIndex );
        code = cvtest::TS::FAIL_INVALID_OUTPUT;
    }
    else if( err > 0.008f )
    {
        ts->printf( cvtest::TS::LOG, "Case index %i : Bad accuracy (%f) on test data.\n", caseIndex, err );
        code = cvtest::TS::FAIL_BAD_ACCURACY;
    }

    return code;
}

void CV_EMTest::run( int /*start_from*/ )
{
    int sizesArr[] = { 500, 700, 800 };
    int pointsCount = sizesArr[0]+ sizesArr[1] + sizesArr[2];

    // Points distribution
    Mat means;
    vector<Mat> covs;
    defaultDistribs( means, covs, CV_64FC1 );

    // train data
    Mat trainData( pointsCount, 2, CV_64FC1 ), trainLabels;
    vector<int> sizes( sizesArr, sizesArr + sizeof(sizesArr) / sizeof(sizesArr[0]) );
    generateData( trainData, trainLabels, sizes, means, covs, CV_64FC1, CV_32SC1 );

    // test data
    Mat testData( pointsCount, 2, CV_64FC1 ), testLabels;
    generateData( testData, testLabels, sizes, means, covs, CV_64FC1, CV_32SC1 );

    EM_Params params;
    params.nclusters = 3;
    Mat probs(trainData.rows, params.nclusters, CV_64FC1, cv::Scalar(1));
    params.probs = &probs;
    Mat weights(1, params.nclusters, CV_64FC1, cv::Scalar(1));
    params.weights = &weights;
    params.means = &means;
    params.covs = &covs;

    int code = cvtest::TS::OK;
    int caseIndex = 0;
    {
        params.startStep = EM::START_AUTO_STEP;
        params.covMatType = EM::COV_MAT_GENERIC;
        int currCode = runCase(caseIndex++, params, trainData, trainLabels, testData, testLabels, sizes);
        code = currCode == cvtest::TS::OK ? code : currCode;
    }
    {
        params.startStep = EM::START_AUTO_STEP;
        params.covMatType = EM::COV_MAT_DIAGONAL;
        int currCode = runCase(caseIndex++, params, trainData, trainLabels, testData, testLabels, sizes);
        code = currCode == cvtest::TS::OK ? code : currCode;
    }
    {
        params.startStep = EM::START_AUTO_STEP;
        params.covMatType = EM::COV_MAT_SPHERICAL;
        int currCode = runCase(caseIndex++, params, trainData, trainLabels, testData, testLabels, sizes);
        code = currCode == cvtest::TS::OK ? code : currCode;
    }
    {
        params.startStep = EM::START_M_STEP;
        params.covMatType = EM::COV_MAT_GENERIC;
        int currCode = runCase(caseIndex++, params, trainData, trainLabels, testData, testLabels, sizes);
        code = currCode == cvtest::TS::OK ? code : currCode;
    }
    {
        params.startStep = EM::START_M_STEP;
        params.covMatType = EM::COV_MAT_DIAGONAL;
        int currCode = runCase(caseIndex++, params, trainData, trainLabels, testData, testLabels, sizes);
        code = currCode == cvtest::TS::OK ? code : currCode;
    }
    {
        params.startStep = EM::START_M_STEP;
        params.covMatType = EM::COV_MAT_SPHERICAL;
        int currCode = runCase(caseIndex++, params, trainData, trainLabels, testData, testLabels, sizes);
        code = currCode == cvtest::TS::OK ? code : currCode;
    }
    {
        params.startStep = EM::START_E_STEP;
        params.covMatType = EM::COV_MAT_GENERIC;
        int currCode = runCase(caseIndex++, params, trainData, trainLabels, testData, testLabels, sizes);
        code = currCode == cvtest::TS::OK ? code : currCode;
    }
    {
        params.startStep = EM::START_E_STEP;
        params.covMatType = EM::COV_MAT_DIAGONAL;
        int currCode = runCase(caseIndex++, params, trainData, trainLabels, testData, testLabels, sizes);
        code = currCode == cvtest::TS::OK ? code : currCode;
    }
    {
        params.startStep = EM::START_E_STEP;
        params.covMatType = EM::COV_MAT_SPHERICAL;
        int currCode = runCase(caseIndex++, params, trainData, trainLabels, testData, testLabels, sizes);
        code = currCode == cvtest::TS::OK ? code : currCode;
    }

    ts->set_failed_test_info( code );
}

class CV_EMTest_SaveLoad : public cvtest::BaseTest {
public:
    CV_EMTest_SaveLoad() {}
protected:
    virtual void run( int /*start_from*/ )
    {
        int code = cvtest::TS::OK;
        const int nclusters = 2;

        Mat samples = Mat(3,1,CV_64FC1);
        samples.at<double>(0,0) = 1;
        samples.at<double>(1,0) = 2;
        samples.at<double>(2,0) = 3;

        Mat labels;

        Ptr<EM> em = EM::create();
        em->setClustersNumber(nclusters);
        em->trainEM(samples, noArray(), labels, noArray());

        Mat firstResult(samples.rows, 1, CV_32SC1);
        for( int i = 0; i < samples.rows; i++)
            firstResult.at<int>(i) = static_cast<int>(em->predict2(samples.row(i), noArray())[1]);

        // Write out
        string filename = cv::tempfile(".xml");
        {
            FileStorage fs = FileStorage(filename, FileStorage::WRITE);
            try
            {
                fs << "em" << "{";
                em->write(fs);
                fs << "}";
            }
            catch(...)
            {
                ts->printf( cvtest::TS::LOG, "Crash in write method.\n" );
                ts->set_failed_test_info( cvtest::TS::FAIL_EXCEPTION );
            }
        }

        em.release();

        // Read in
        try
        {
            em = Algorithm::load<EM>(filename);
        }
        catch(...)
        {
            ts->printf( cvtest::TS::LOG, "Crash in read method.\n" );
            ts->set_failed_test_info( cvtest::TS::FAIL_EXCEPTION );
        }

        remove( filename.c_str() );

        int errCaseCount = 0;
        for( int i = 0; i < samples.rows; i++)
            errCaseCount = std::abs(em->predict2(samples.row(i), noArray())[1] - firstResult.at<int>(i)) < FLT_EPSILON ? 0 : 1;

        if( errCaseCount > 0 )
        {
            ts->printf( cvtest::TS::LOG, "Different prediction results before writing and after reading (errCaseCount=%d).\n", errCaseCount );
            code = cvtest::TS::FAIL_BAD_ACCURACY;
        }

        ts->set_failed_test_info( code );
    }
};
|
|
||||||
|
|
||||||
class CV_EMTest_Classification : public cvtest::BaseTest
|
|
||||||
{
|
|
||||||
public:
|
|
||||||
CV_EMTest_Classification() {}
|
|
||||||
protected:
|
|
||||||
virtual void run(int)
|
|
||||||
{
|
|
||||||
// This test classifies spam by the following way:
|
|
||||||
// 1. estimates distributions of "spam" / "not spam"
|
|
||||||
// 2. predict classID using Bayes classifier for estimated distributions.
|
|
||||||
|
|
||||||
string dataFilename = string(ts->get_data_path()) + "spambase.data";
|
|
||||||
Ptr<TrainData> data = TrainData::loadFromCSV(dataFilename, 0);
|
|
||||||
|
|
||||||
if( data.empty() )
|
|
||||||
{
|
|
||||||
ts->printf(cvtest::TS::LOG, "File with spambase dataset can't be read.\n");
|
|
||||||
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_TEST_DATA);
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
Mat samples = data->getSamples();
|
|
||||||
CV_Assert(samples.cols == 57);
|
|
||||||
Mat responses = data->getResponses();
|
|
||||||
|
|
||||||
vector<int> trainSamplesMask(samples.rows, 0);
|
|
||||||
int trainSamplesCount = (int)(0.5f * samples.rows);
|
|
||||||
for(int i = 0; i < trainSamplesCount; i++)
|
|
||||||
trainSamplesMask[i] = 1;
|
|
||||||
RNG rng(0);
|
|
||||||
for(size_t i = 0; i < trainSamplesMask.size(); i++)
|
|
||||||
{
|
|
||||||
int i1 = rng(static_cast<unsigned>(trainSamplesMask.size()));
|
|
||||||
int i2 = rng(static_cast<unsigned>(trainSamplesMask.size()));
|
|
||||||
std::swap(trainSamplesMask[i1], trainSamplesMask[i2]);
|
|
||||||
}
|
|
||||||
|
|
||||||
Mat samples0, samples1;
|
|
||||||
for(int i = 0; i < samples.rows; i++)
|
|
||||||
{
|
|
||||||
if(trainSamplesMask[i])
|
|
||||||
{
|
|
||||||
Mat sample = samples.row(i);
|
|
||||||
int resp = (int)responses.at<float>(i);
|
|
||||||
if(resp == 0)
|
|
||||||
samples0.push_back(sample);
|
|
||||||
else
|
|
||||||
samples1.push_back(sample);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
Ptr<EM> model0 = EM::create();
|
|
||||||
model0->setClustersNumber(3);
|
|
||||||
model0->trainEM(samples0, noArray(), noArray(), noArray());
|
|
||||||
|
|
||||||
Ptr<EM> model1 = EM::create();
|
|
||||||
model1->setClustersNumber(3);
|
|
||||||
model1->trainEM(samples1, noArray(), noArray(), noArray());
|
|
||||||
|
|
||||||
Mat trainConfusionMat(2, 2, CV_32SC1, Scalar(0)),
|
|
||||||
testConfusionMat(2, 2, CV_32SC1, Scalar(0));
|
|
||||||
const double lambda = 1.;
|
|
||||||
for(int i = 0; i < samples.rows; i++)
|
|
||||||
{
|
|
||||||
Mat sample = samples.row(i);
|
|
||||||
double sampleLogLikelihoods0 = model0->predict2(sample, noArray())[0];
|
|
||||||
double sampleLogLikelihoods1 = model1->predict2(sample, noArray())[0];
|
|
||||||
|
|
||||||
int classID = sampleLogLikelihoods0 >= lambda * sampleLogLikelihoods1 ? 0 : 1;
|
|
||||||
|
|
||||||
if(trainSamplesMask[i])
|
|
||||||
trainConfusionMat.at<int>((int)responses.at<float>(i), classID)++;
|
|
||||||
else
|
|
||||||
testConfusionMat.at<int>((int)responses.at<float>(i), classID)++;
|
|
||||||
}
|
|
||||||
// std::cout << trainConfusionMat << std::endl;
|
|
||||||
// std::cout << testConfusionMat << std::endl;
|
|
||||||
|
|
||||||
double trainError = (double)(trainConfusionMat.at<int>(1,0) + trainConfusionMat.at<int>(0,1)) / trainSamplesCount;
|
|
||||||
double testError = (double)(testConfusionMat.at<int>(1,0) + testConfusionMat.at<int>(0,1)) / (samples.rows - trainSamplesCount);
|
|
||||||
const double maxTrainError = 0.23;
|
|
||||||
const double maxTestError = 0.26;
|
|
||||||
|
|
||||||
int code = cvtest::TS::OK;
|
|
||||||
if(trainError > maxTrainError)
|
|
||||||
{
|
|
||||||
ts->printf(cvtest::TS::LOG, "Too large train classification error (calc = %f, valid=%f).\n", trainError, maxTrainError);
|
|
||||||
code = cvtest::TS::FAIL_INVALID_TEST_DATA;
|
|
||||||
}
|
|
||||||
if(testError > maxTestError)
|
|
||||||
{
|
|
||||||
ts->printf(cvtest::TS::LOG, "Too large test classification error (calc = %f, valid=%f).\n", testError, maxTestError);
|
|
||||||
code = cvtest::TS::FAIL_INVALID_TEST_DATA;
|
|
||||||
}
|
|
||||||
|
|
||||||
ts->set_failed_test_info(code);
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
TEST(ML_KMeans, accuracy) { CV_KMeansTest test; test.safe_run(); }
|
|
||||||
TEST(ML_KNearest, accuracy) { CV_KNearestTest test; test.safe_run(); }
|
|
||||||
TEST(ML_EM, accuracy) { CV_EMTest test; test.safe_run(); }
|
|
||||||
TEST(ML_EM, save_load) { CV_EMTest_SaveLoad test; test.safe_run(); }
|
|
||||||
TEST(ML_EM, classification) { CV_EMTest_Classification test; test.safe_run(); }
|
|
||||||
|
|
||||||
TEST(ML_KNearest, regression_12347)
|
|
||||||
{
|
|
||||||
Mat xTrainData = (Mat_<float>(5,2) << 1, 1.1, 1.1, 1, 2, 2, 2.1, 2, 2.1, 2.1);
|
|
||||||
Mat yTrainLabels = (Mat_<float>(5,1) << 1, 1, 2, 2, 2);
|
|
||||||
Ptr<KNearest> knn = KNearest::create();
|
|
||||||
knn->train(xTrainData, ml::ROW_SAMPLE, yTrainLabels);
|
|
||||||
|
|
||||||
Mat xTestData = (Mat_<float>(2,2) << 1.1, 1.1, 2, 2.2);
|
|
||||||
Mat zBestLabels, neighbours, dist;
|
|
||||||
// check output shapes:
|
|
||||||
int K = 16, Kexp = std::min(K, xTrainData.rows);
|
|
||||||
knn->findNearest(xTestData, K, zBestLabels, neighbours, dist);
|
|
||||||
EXPECT_EQ(xTestData.rows, zBestLabels.rows);
|
|
||||||
EXPECT_EQ(neighbours.cols, Kexp);
|
|
||||||
EXPECT_EQ(dist.cols, Kexp);
|
|
||||||
// see if the result is still correct:
|
|
||||||
K = 2;
|
|
||||||
knn->findNearest(xTestData, K, zBestLabels, neighbours, dist);
|
|
||||||
EXPECT_EQ(1, zBestLabels.at<float>(0,0));
|
|
||||||
EXPECT_EQ(2, zBestLabels.at<float>(1,0));
|
|
||||||
}
|
|
||||||
|
|
||||||
}} // namespace
@ -1,286 +0,0 @@
#include "test_precomp.hpp"

#if 0

using namespace std;

class CV_GBTreesTest : public cvtest::BaseTest
{
public:
    CV_GBTreesTest();
    ~CV_GBTreesTest();

protected:
    void run(int);

    int TestTrainPredict(int test_num);
    int TestSaveLoad();

    int checkPredictError(int test_num);
    int checkLoadSave();

    string model_file_name1;
    string model_file_name2;

    string* datasets;
    string data_path;

    CvMLData* data;
    CvGBTrees* gtb;

    vector<float> test_resps1;
    vector<float> test_resps2;

    int64 initSeed;
};

int _get_len(const CvMat* mat)
{
    return (mat->cols > mat->rows) ? mat->cols : mat->rows;
}

CV_GBTreesTest::CV_GBTreesTest()
{
    int64 seeds[] = { CV_BIG_INT(0x00009fff4f9c8d52),
                      CV_BIG_INT(0x0000a17166072c7c),
                      CV_BIG_INT(0x0201b32115cd1f9a),
                      CV_BIG_INT(0x0513cb37abcd1234),
                      CV_BIG_INT(0x0001a2b3c4d5f678)
    };

    int seedCount = sizeof(seeds)/sizeof(seeds[0]);
    cv::RNG& rng = cv::theRNG();
    initSeed = rng.state;
    rng.state = seeds[rng(seedCount)];

    datasets = 0;
    data = 0;
    gtb = 0;
}

CV_GBTreesTest::~CV_GBTreesTest()
{
    if (data)
        delete data;
    delete[] datasets;
    cv::theRNG().state = initSeed;
}

int CV_GBTreesTest::TestTrainPredict(int test_num)
{
    int code = cvtest::TS::OK;

    int weak_count = 200;
    float shrinkage = 0.1f;
    float subsample_portion = 0.5f;
    int max_depth = 5;
    bool use_surrogates = false;
    int loss_function_type = 0;
    switch (test_num)
    {
        case (1) : loss_function_type = CvGBTrees::SQUARED_LOSS; break;
        case (2) : loss_function_type = CvGBTrees::ABSOLUTE_LOSS; break;
        case (3) : loss_function_type = CvGBTrees::HUBER_LOSS; break;
        case (0) : loss_function_type = CvGBTrees::DEVIANCE_LOSS; break;
        default :
        {
            ts->printf( cvtest::TS::LOG, "Bad test_num value in CV_GBTreesTest::TestTrainPredict(..) function." );
            return cvtest::TS::FAIL_BAD_ARG_CHECK;
        }
    }

    int dataset_num = test_num == 0 ? 0 : 1;
    if (!data)
    {
        data = new CvMLData();
        data->set_delimiter(',');

        if (data->read_csv(datasets[dataset_num].c_str()))
        {
            ts->printf( cvtest::TS::LOG, "File reading error." );
            return cvtest::TS::FAIL_INVALID_TEST_DATA;
        }

        if (test_num == 0)
        {
            data->set_response_idx(57);
            data->set_var_types("ord[0-56],cat[57]");
        }
        else
        {
            data->set_response_idx(13);
            data->set_var_types("ord[0-2,4-13],cat[3]");
            subsample_portion = 0.7f;
        }

        int train_sample_count = cvFloor(_get_len(data->get_responses())*0.5f);
        CvTrainTestSplit spl( train_sample_count );
        data->set_train_test_split( &spl );
    }

    data->mix_train_and_test_idx();

    if (gtb) delete gtb;
    gtb = new CvGBTrees();
    bool tmp_code = true;
    tmp_code = gtb->train(data, CvGBTreesParams(loss_function_type, weak_count,
                          shrinkage, subsample_portion,
                          max_depth, use_surrogates));

    if (!tmp_code)
    {
        ts->printf( cvtest::TS::LOG, "Model training failed.");
        return cvtest::TS::FAIL_INVALID_OUTPUT;
    }

    code = checkPredictError(test_num);

    return code;
}

int CV_GBTreesTest::checkPredictError(int test_num)
{
    if (!gtb)
        return cvtest::TS::FAIL_GENERIC;

    //float mean[] = {5.430247f, 13.5654f, 12.6569f, 13.1661f};
    //float sigma[] = {0.4162694f, 3.21161f, 3.43297f, 3.00624f};
    float mean[] = {5.80226f, 12.68689f, 13.49095f, 13.19628f};
    float sigma[] = {0.4764534f, 3.166919f, 3.022405f, 2.868722f};

    float current_error = gtb->calc_error(data, CV_TEST_ERROR);

    if ( abs( current_error - mean[test_num]) > 6*sigma[test_num] )
    {
        ts->printf( cvtest::TS::LOG, "Test error is out of range:\n"
                    "abs(%f/*curEr*/ - %f/*mean*/ > %f/*6*sigma*/",
                    current_error, mean[test_num], 6*sigma[test_num] );
        return cvtest::TS::FAIL_BAD_ACCURACY;
    }

    return cvtest::TS::OK;
}

int CV_GBTreesTest::TestSaveLoad()
{
    if (!gtb)
        return cvtest::TS::FAIL_GENERIC;

    model_file_name1 = cv::tempfile();
    model_file_name2 = cv::tempfile();

    gtb->save(model_file_name1.c_str());
    gtb->calc_error(data, CV_TEST_ERROR, &test_resps1);
    gtb->load(model_file_name1.c_str());
    gtb->calc_error(data, CV_TEST_ERROR, &test_resps2);
    gtb->save(model_file_name2.c_str());

    return checkLoadSave();
}

int CV_GBTreesTest::checkLoadSave()
{
    int code = cvtest::TS::OK;

    // 1. compare files
    ifstream f1( model_file_name1.c_str() ), f2( model_file_name2.c_str() );
    string s1, s2;
    int lineIdx = 0;
    CV_Assert( f1.is_open() && f2.is_open() );
    for( ; !f1.eof() && !f2.eof(); lineIdx++ )
    {
        getline( f1, s1 );
        getline( f2, s2 );
        if( s1.compare(s2) )
        {
            ts->printf( cvtest::TS::LOG, "first and second saved files differ in line %d; first file line: %s; second file line: %s",
                        lineIdx, s1.c_str(), s2.c_str() );
            code = cvtest::TS::FAIL_INVALID_OUTPUT;
        }
    }
    if( !f1.eof() || !f2.eof() )
    {
        ts->printf( cvtest::TS::LOG, "First and second saved files differ in line %d; first file line: %s; second file line: %s",
                    lineIdx, s1.c_str(), s2.c_str() );
        code = cvtest::TS::FAIL_INVALID_OUTPUT;
    }
    f1.close();
    f2.close();
    // delete temporary files
    remove( model_file_name1.c_str() );
    remove( model_file_name2.c_str() );

    // 2. compare responses
    CV_Assert( test_resps1.size() == test_resps2.size() );
    vector<float>::const_iterator it1 = test_resps1.begin(), it2 = test_resps2.begin();
    for( ; it1 != test_resps1.end(); ++it1, ++it2 )
    {
        if( fabs(*it1 - *it2) > FLT_EPSILON )
        {
            ts->printf( cvtest::TS::LOG, "Responses predicted before saving and after loading are different" );
            code = cvtest::TS::FAIL_INVALID_OUTPUT;
        }
    }
    return code;
}

void CV_GBTreesTest::run(int)
{
    string dataPath = string(ts->get_data_path());
    datasets = new string[2];
    datasets[0] = dataPath + string("spambase.data"); /*string("dataset_classification.csv");*/
    datasets[1] = dataPath + string("housing_.data"); /*string("dataset_regression.csv");*/

    int code = cvtest::TS::OK;

    for (int i = 0; i < 4; i++)
    {
        int temp_code = TestTrainPredict(i);
        if (temp_code != cvtest::TS::OK)
        {
            code = temp_code;
            break;
        }
        else if (i==0)
        {
            temp_code = TestSaveLoad();
            if (temp_code != cvtest::TS::OK)
                code = temp_code;
            delete data;
            data = 0;
        }

        delete gtb;
        gtb = 0;
    }
    delete data;
    data = 0;

    ts->set_failed_test_info( code );
}

/////////////////////////////////////////////////////////////////////////////
//////////////////// test registration /////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////

TEST(ML_GBTrees, regression) { CV_GBTreesTest test; test.safe_run(); }

#endif

53 modules/ml/test/test_kmeans.cpp Normal file
@ -0,0 +1,53 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

#include "test_precomp.hpp"

namespace opencv_test { namespace {

TEST(ML_KMeans, accuracy)
{
    const int iters = 100;
    int sizesArr[] = { 5000, 7000, 8000 };
    int pointsCount = sizesArr[0]+ sizesArr[1] + sizesArr[2];

    Mat data( pointsCount, 2, CV_32FC1 ), labels;
    vector<int> sizes( sizesArr, sizesArr + sizeof(sizesArr) / sizeof(sizesArr[0]) );
    Mat means;
    vector<Mat> covs;
    defaultDistribs( means, covs );
    generateData( data, labels, sizes, means, covs, CV_32FC1, CV_32SC1 );
    TermCriteria termCriteria( TermCriteria::COUNT, iters, 0.0);

    {
        SCOPED_TRACE("KMEANS_PP_CENTERS");
        float err = 1000;
        Mat bestLabels;
        kmeans( data, 3, bestLabels, termCriteria, 0, KMEANS_PP_CENTERS, noArray() );
        EXPECT_TRUE(calcErr( bestLabels, labels, sizes, err, false ));
        EXPECT_LE(err, 0.01f);
    }
    {
        SCOPED_TRACE("KMEANS_RANDOM_CENTERS");
        float err = 1000;
        Mat bestLabels;
        kmeans( data, 3, bestLabels, termCriteria, 0, KMEANS_RANDOM_CENTERS, noArray() );
        EXPECT_TRUE(calcErr( bestLabels, labels, sizes, err, false ));
        EXPECT_LE(err, 0.01f);
    }
    {
        SCOPED_TRACE("KMEANS_USE_INITIAL_LABELS");
        float err = 1000;
        Mat bestLabels;
        labels.copyTo( bestLabels );
        RNG &rng = cv::theRNG();
        for( int i = 0; i < 0.5f * pointsCount; i++ )
            bestLabels.at<int>( rng.next() % pointsCount, 0 ) = rng.next() % 3;
        kmeans( data, 3, bestLabels, termCriteria, 0, KMEANS_USE_INITIAL_LABELS, noArray() );
        EXPECT_TRUE(calcErr( bestLabels, labels, sizes, err, false ));
        EXPECT_LE(err, 0.01f);
    }
}

}} // namespace
77 modules/ml/test/test_knearest.cpp Normal file
@ -0,0 +1,77 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

#include "test_precomp.hpp"

namespace opencv_test { namespace {

using cv::ml::TrainData;
using cv::ml::EM;
using cv::ml::KNearest;

TEST(ML_KNearest, accuracy)
{
    int sizesArr[] = { 500, 700, 800 };
    int pointsCount = sizesArr[0]+ sizesArr[1] + sizesArr[2];

    Mat trainData( pointsCount, 2, CV_32FC1 ), trainLabels;
    vector<int> sizes( sizesArr, sizesArr + sizeof(sizesArr) / sizeof(sizesArr[0]) );
    Mat means;
    vector<Mat> covs;
    defaultDistribs( means, covs );
    generateData( trainData, trainLabels, sizes, means, covs, CV_32FC1, CV_32FC1 );

    Mat testData( pointsCount, 2, CV_32FC1 );
    Mat testLabels;
    generateData( testData, testLabels, sizes, means, covs, CV_32FC1, CV_32FC1 );

    {
        SCOPED_TRACE("Default");
        Mat bestLabels;
        float err = 1000;
        Ptr<KNearest> knn = KNearest::create();
        knn->train(trainData, ml::ROW_SAMPLE, trainLabels);
        knn->findNearest(testData, 4, bestLabels);
        EXPECT_TRUE(calcErr( bestLabels, testLabels, sizes, err, true ));
        EXPECT_LE(err, 0.01f);
    }
    {
        // TODO: broken
#if 0
        SCOPED_TRACE("KDTree");
        Mat bestLabels;
        float err = 1000;
        Ptr<KNearest> knn = KNearest::create();
        knn->setAlgorithmType(KNearest::KDTREE);
        knn->train(trainData, ml::ROW_SAMPLE, trainLabels);
        knn->findNearest(testData, 4, bestLabels);
        EXPECT_TRUE(calcErr( bestLabels, testLabels, sizes, err, true ));
        EXPECT_LE(err, 0.01f);
#endif
    }
}

TEST(ML_KNearest, regression_12347)
{
    Mat xTrainData = (Mat_<float>(5,2) << 1, 1.1, 1.1, 1, 2, 2, 2.1, 2, 2.1, 2.1);
    Mat yTrainLabels = (Mat_<float>(5,1) << 1, 1, 2, 2, 2);
    Ptr<KNearest> knn = KNearest::create();
    knn->train(xTrainData, ml::ROW_SAMPLE, yTrainLabels);

    Mat xTestData = (Mat_<float>(2,2) << 1.1, 1.1, 2, 2.2);
    Mat zBestLabels, neighbours, dist;
    // check output shapes:
    int K = 16, Kexp = std::min(K, xTrainData.rows);
    knn->findNearest(xTestData, K, zBestLabels, neighbours, dist);
    EXPECT_EQ(xTestData.rows, zBestLabels.rows);
    EXPECT_EQ(neighbours.cols, Kexp);
    EXPECT_EQ(dist.cols, Kexp);
    // see if the result is still correct:
    K = 2;
    knn->findNearest(xTestData, K, zBestLabels, neighbours, dist);
    EXPECT_EQ(1, zBestLabels.at<float>(0,0));
    EXPECT_EQ(2, zBestLabels.at<float>(1,0));
}

}} // namespace
@ -1,9 +1,6 @@
-///////////////////////////////////////////////////////////////////////////////////////
-// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
-// By downloading, copying, installing or using the software you agree to this license.
-// If you do not agree to this license, do not download, install,
-// copy or use the software.
+// This file is part of OpenCV project.
+// It is subject to the license terms in the LICENSE file found in the top-level directory
+// of this distribution and at http://opencv.org/license.html.
 
 // This is a implementation of the Logistic Regression algorithm in C++ in OpenCV.
 
@ -11,92 +8,16 @@
 // Rahul Kavi rahulkavi[at]live[at]com
 //
-
-// contains a subset of data from the popular Iris Dataset (taken from "http://archive.ics.uci.edu/ml/datasets/Iris")
-
-// # You are free to use, change, or redistribute the code in any way you wish for
-// # non-commercial purposes, but please maintain the name of the original author.
-// # This code comes with no warranty of any kind.
-
-// #
-// # You are free to use, change, or redistribute the code in any way you wish for
-// # non-commercial purposes, but please maintain the name of the original author.
-// # This code comes with no warranty of any kind.
-
-// # Logistic Regression ALGORITHM
-
-
-//                           License Agreement
-//                For Open Source Computer Vision Library
-
-// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
-// Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
-// Third party copyrights are property of their respective owners.
-
-// Redistribution and use in source and binary forms, with or without modification,
-// are permitted provided that the following conditions are met:
-
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-
-//   * The name of the copyright holders may not be used to endorse or promote products
-//     derived from this software without specific prior written permission.
-
-// This software is provided by the copyright holders and contributors "as is" and
-// any express or implied warranties, including, but not limited to, the implied
-// warranties of merchantability and fitness for a particular purpose are disclaimed.
-// In no event shall the Intel Corporation or contributors be liable for any direct,
-// indirect, incidental, special, exemplary, or consequential damages
-// (including, but not limited to, procurement of substitute goods or services;
-// loss of use, data, or profits; or business interruption) however caused
-// and on any theory of liability, whether in contract, strict liability,
-// or tort (including negligence or otherwise) arising in any way out of
-// the use of this software, even if advised of the possibility of such damage.
-
 #include "test_precomp.hpp"
 
 namespace opencv_test { namespace {
 
-bool calculateError( const Mat& _p_labels, const Mat& _o_labels, float& error)
+TEST(ML_LR, accuracy)
 {
-    CV_TRACE_FUNCTION();
-    error = 0.0f;
-    float accuracy = 0.0f;
-    Mat _p_labels_temp;
-    Mat _o_labels_temp;
-    _p_labels.convertTo(_p_labels_temp, CV_32S);
-    _o_labels.convertTo(_o_labels_temp, CV_32S);
-
-    CV_Assert(_p_labels_temp.total() == _o_labels_temp.total());
-    CV_Assert(_p_labels_temp.rows == _o_labels_temp.rows);
-
-    accuracy = (float)countNonZero(_p_labels_temp == _o_labels_temp)/_p_labels_temp.rows;
-    error = 1 - accuracy;
-    return true;
-}
-
-//--------------------------------------------------------------------------------------------
-
-class CV_LRTest : public cvtest::BaseTest
-{
-public:
-    CV_LRTest() {}
-protected:
-    virtual void run( int start_from );
-};
-
-void CV_LRTest::run( int /*start_from*/ )
-{
-    CV_TRACE_FUNCTION();
-    // initialize variables from the popular Iris Dataset
-    string dataFileName = ts->get_data_path() + "iris.data";
+    std::string dataFileName = findDataFile("iris.data");
     Ptr<TrainData> tdata = TrainData::loadFromCSV(dataFileName, 0);
-    ASSERT_FALSE(tdata.empty()) << "Could not find test data file : " << dataFileName;
+    ASSERT_FALSE(tdata.empty());
 
-    // run LR classifier train classifier
     Ptr<LogisticRegression> p = LogisticRegression::create();
     p->setLearningRate(1.0);
     p->setIterations(10001);
@ -105,121 +26,54 @@ void CV_LRTest::run( int /*start_from*/ )
     p->setMiniBatchSize(10);
     p->train(tdata);
 
-    // predict using the same data
     Mat responses;
     p->predict(tdata->getSamples(), responses);
 
-    // calculate error
-    int test_code = cvtest::TS::OK;
-    float error = 0.0f;
-    if(!calculateError(responses, tdata->getResponses(), error))
-    {
-        ts->printf(cvtest::TS::LOG, "Bad prediction labels\n" );
-        test_code = cvtest::TS::FAIL_INVALID_OUTPUT;
-    }
-    else if(error > 0.05f)
-    {
-        ts->printf(cvtest::TS::LOG, "Bad accuracy of (%f)\n", error);
-        test_code = cvtest::TS::FAIL_BAD_ACCURACY;
-    }
-
-    {
-        FileStorage s("debug.xml", FileStorage::WRITE);
-        s << "original" << tdata->getResponses();
-        s << "predicted1" << responses;
-        s << "learnt" << p->get_learnt_thetas();
-        s << "error" << error;
-        s.release();
-    }
-    ts->set_failed_test_info(test_code);
+    float error = 1000;
+    EXPECT_TRUE(calculateError(responses, tdata->getResponses(), error));
+    EXPECT_LE(error, 0.05f);
 }
 
-//--------------------------------------------------------------------------------------------
-class CV_LRTest_SaveLoad : public cvtest::BaseTest
-{
-public:
-    CV_LRTest_SaveLoad(){}
-protected:
-    virtual void run(int start_from);
-};
-
-void CV_LRTest_SaveLoad::run( int /*start_from*/ )
-{
-    CV_TRACE_FUNCTION();
-    int code = cvtest::TS::OK;
-
-    // initialize variables from the popular Iris Dataset
-    string dataFileName = ts->get_data_path() + "iris.data";
+//==================================================================================================
+
+TEST(ML_LR, save_load)
+{
+    string dataFileName = findDataFile("iris.data");
     Ptr<TrainData> tdata = TrainData::loadFromCSV(dataFileName, 0);
-    ASSERT_FALSE(tdata.empty()) << "Could not find test data file : " << dataFileName;
+    ASSERT_FALSE(tdata.empty());
 
     Mat responses1, responses2;
     Mat learnt_mat1, learnt_mat2;
 
-    // train and save the classifier
     String filename = tempfile(".xml");
-    try
     {
-        // run LR classifier train classifier
         Ptr<LogisticRegression> lr1 = LogisticRegression::create();
         lr1->setLearningRate(1.0);
         lr1->setIterations(10001);
         lr1->setRegularization(LogisticRegression::REG_L2);
         lr1->setTrainMethod(LogisticRegression::BATCH);
         lr1->setMiniBatchSize(10);
-        lr1->train(tdata);
-        lr1->predict(tdata->getSamples(), responses1);
+        ASSERT_NO_THROW(lr1->train(tdata));
+        ASSERT_NO_THROW(lr1->predict(tdata->getSamples(), responses1));
+        ASSERT_NO_THROW(lr1->save(filename));
         learnt_mat1 = lr1->get_learnt_thetas();
-        lr1->save(filename);
     }
-    catch(...)
     {
-        ts->printf(cvtest::TS::LOG, "Crash in write method.\n" );
-        ts->set_failed_test_info(cvtest::TS::FAIL_EXCEPTION);
-    }
-
-    // and load to another
-    try
-    {
-        Ptr<LogisticRegression> lr2 = Algorithm::load<LogisticRegression>(filename);
-        lr2->predict(tdata->getSamples(), responses2);
+        Ptr<LogisticRegression> lr2;
+        ASSERT_NO_THROW(lr2 = Algorithm::load<LogisticRegression>(filename));
+        ASSERT_NO_THROW(lr2->predict(tdata->getSamples(), responses2));
         learnt_mat2 = lr2->get_learnt_thetas();
     }
-    catch(...)
-    {
-        ts->printf(cvtest::TS::LOG, "Crash in write method.\n" );
-        ts->set_failed_test_info(cvtest::TS::FAIL_EXCEPTION);
-    }
-
-    CV_Assert(responses1.rows == responses2.rows);
-
-    // compare difference in learnt matrices before and after loading from disk
+    // compare difference in prediction outputs and stored inputs
+    EXPECT_MAT_NEAR(responses1, responses2, 0.f);
+
     Mat comp_learnt_mats;
     comp_learnt_mats = (learnt_mat1 == learnt_mat2);
     comp_learnt_mats = comp_learnt_mats.reshape(1, comp_learnt_mats.rows*comp_learnt_mats.cols);
     comp_learnt_mats.convertTo(comp_learnt_mats, CV_32S);
     comp_learnt_mats = comp_learnt_mats/255;
 
-    // compare difference in prediction outputs and stored inputs
     // check if there is any difference between computed learnt mat and retrieved mat
-    float errorCount = 0.0;
-    errorCount += 1 - (float)countNonZero(responses1 == responses2)/responses1.rows;
+    EXPECT_EQ(comp_learnt_mats.rows, sum(comp_learnt_mats)[0]);
|
|
||||||
errorCount += 1 - (float)sum(comp_learnt_mats)[0]/comp_learnt_mats.rows;
|
|
||||||
|
|
||||||
if(errorCount>0)
|
|
||||||
{
|
|
||||||
ts->printf( cvtest::TS::LOG, "Different prediction results before writing and after reading (errorCount=%d).\n", errorCount );
|
|
||||||
code = cvtest::TS::FAIL_BAD_ACCURACY;
|
|
||||||
}
|
|
||||||
|
|
||||||
remove( filename.c_str() );
|
remove( filename.c_str() );
|
||||||
|
|
||||||
ts->set_failed_test_info( code );
|
|
||||||
}
|
}
|
||||||
|
|
||||||
TEST(ML_LR, accuracy) { CV_LRTest test; test.safe_run(); }
|
|
||||||
TEST(ML_LR, save_load) { CV_LRTest_SaveLoad test; test.safe_run(); }
|
|
||||||
|
|
||||||
}} // namespace
|
}} // namespace
|
||||||
@@ -1,224 +1,373 @@
-/*M///////////////////////////////////////////////////////////////////////////////////////
+// This file is part of OpenCV project.
-//
+// It is subject to the license terms in the LICENSE file found in the top-level directory
-//  IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+// of this distribution and at http://opencv.org/license.html.
-//
-//  By downloading, copying, installing or using the software you agree to this license.
-//  If you do not agree to this license, do not download, install,
-//  copy or use the software.
-//
-//
-//                        Intel License Agreement
-//                For Open Source Computer Vision Library
-//
-// Copyright (C) 2000, Intel Corporation, all rights reserved.
-// Third party copyrights are property of their respective owners.
-//
-// Redistribution and use in source and binary forms, with or without modification,
-// are permitted provided that the following conditions are met:
-//
-//   * Redistribution's of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//
-//   * Redistribution's in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//
-//   * The name of Intel Corporation may not be used to endorse or promote products
-//     derived from this software without specific prior written permission.
-//
-// This software is provided by the copyright holders and contributors "as is" and
-// any express or implied warranties, including, but not limited to, the implied
-// warranties of merchantability and fitness for a particular purpose are disclaimed.
-// In no event shall the Intel Corporation or contributors be liable for any direct,
-// indirect, incidental, special, exemplary, or consequential damages
-// (including, but not limited to, procurement of substitute goods or services;
-// loss of use, data, or profits; or business interruption) however caused
-// and on any theory of liability, whether in contract, strict liability,
-// or tort (including negligence or otherwise) arising in any way out of
-// the use of this software, even if advised of the possibility of such damage.
-//
-//M*/

 #include "test_precomp.hpp"

-namespace opencv_test {
+namespace opencv_test { namespace {

-CV_AMLTest::CV_AMLTest( const char* _modelName ) : CV_MLBaseTest( _modelName )
+struct DatasetDesc
 {
-    validationFN = "avalidation.xml";
+    string name;
+    int resp_idx;
+    int train_count;
+    int cat_num;
+    string type_desc;
+public:
+    Ptr<TrainData> load()
+    {
+        string filename = findDataFile(name + ".data");
+        Ptr<TrainData> data = TrainData::loadFromCSV(filename, 0, resp_idx, resp_idx + 1, type_desc);
+        data->setTrainTestSplit(train_count);
+        data->shuffleTrainTest();
+        return data;
+    }
+};
+
+// see testdata/ml/protocol.txt (?)
+DatasetDesc datasets[] = {
+    { "mushroom", 0, 4000, 16, "cat" },
+    { "adult", 14, 22561, 16, "ord[0,2,4,10-12],cat[1,3,5-9,13,14]" },
+    { "vehicle", 18, 761, 4, "ord[0-17],cat[18]" },
+    { "abalone", 8, 3133, 16, "ord[1-8],cat[0]" },
+    { "ringnorm", 20, 300, 2, "ord[0-19],cat[20]" },
+    { "spambase", 57, 3221, 3, "ord[0-56],cat[57]" },
+    { "waveform", 21, 300, 3, "ord[0-20],cat[21]" },
+    { "elevators", 18, 5000, 0, "ord" },
+    { "letter", 16, 10000, 26, "ord[0-15],cat[16]" },
+    { "twonorm", 20, 300, 3, "ord[0-19],cat[20]" },
+    { "poletelecomm", 48, 2500, 0, "ord" },
+};
+
+static DatasetDesc & getDataset(const string & name)
+{
+    const int sz = sizeof(datasets)/sizeof(datasets[0]);
+    for (int i = 0; i < sz; ++i)
+    {
+        DatasetDesc & desc = datasets[i];
+        if (desc.name == name)
+            return desc;
+    }
+    CV_Error(Error::StsInternal, "");
 }

-int CV_AMLTest::run_test_case( int testCaseIdx )
+//==================================================================================================
+
+// interfaces and templates
+
+template <typename T> string modelName() { return "Unknown"; };
+template <typename T> Ptr<T> tuneModel(const DatasetDesc &, Ptr<T> m) { return m; }
+
+struct IModelFactory
 {
-    CV_TRACE_FUNCTION();
+    virtual Ptr<StatModel> createNew(const DatasetDesc &dataset) const = 0;
-    int code = cvtest::TS::OK;
+    virtual Ptr<StatModel> loadFromFile(const string &filename) const = 0;
-    code = prepare_test_case( testCaseIdx );
+    virtual string name() const = 0;
+    virtual ~IModelFactory() {}
+};

-    if (code == cvtest::TS::OK)
+template <typename T>
+struct ModelFactory : public IModelFactory
+{
+    Ptr<StatModel> createNew(const DatasetDesc &dataset) const CV_OVERRIDE
     {
-        //#define GET_STAT
+        return tuneModel<T>(dataset, T::create());
-#ifdef GET_STAT
-        const char* data_name = ((CvFileNode*)cvGetSeqElem( dataSetNames, testCaseIdx ))->data.str.ptr;
-        printf("%s, %s ", name, data_name);
-        const int icount = 100;
-        float res[icount];
-        for (int k = 0; k < icount; k++)
-        {
-#endif
-            data->shuffleTrainTest();
-            code = train( testCaseIdx );
-#ifdef GET_STAT
-            float case_result = get_error();
-
-            res[k] = case_result;
-        }
-        float mean = 0, sigma = 0;
-        for (int k = 0; k < icount; k++)
-        {
-            mean += res[k];
-        }
-        mean = mean /icount;
-        for (int k = 0; k < icount; k++)
-        {
-            sigma += (res[k] - mean)*(res[k] - mean);
-        }
-        sigma = sqrt(sigma/icount);
-        printf("%f, %f\n", mean, sigma);
-#endif
     }
-    return code;
+    Ptr<StatModel> loadFromFile(const string & filename) const CV_OVERRIDE
+    {
+        return T::load(filename);
+    }
+    string name() const CV_OVERRIDE { return modelName<T>(); }
+};
+
+// implementation
+
+template <> string modelName<NormalBayesClassifier>() { return "NormalBayesClassifier"; }
+template <> string modelName<DTrees>() { return "DTrees"; }
+template <> string modelName<KNearest>() { return "KNearest"; }
+template <> string modelName<RTrees>() { return "RTrees"; }
+template <> string modelName<SVMSGD>() { return "SVMSGD"; }
+
+template<> Ptr<DTrees> tuneModel<DTrees>(const DatasetDesc &dataset, Ptr<DTrees> m)
+{
+    m->setMaxDepth(10);
+    m->setMinSampleCount(2);
+    m->setRegressionAccuracy(0);
+    m->setUseSurrogates(false);
+    m->setCVFolds(0);
+    m->setUse1SERule(false);
+    m->setTruncatePrunedTree(false);
+    m->setPriors(Mat());
+    m->setMaxCategories(dataset.cat_num);
+    return m;
 }

-int CV_AMLTest::validate_test_results( int testCaseIdx )
+template<> Ptr<RTrees> tuneModel<RTrees>(const DatasetDesc &dataset, Ptr<RTrees> m)
 {
-    CV_TRACE_FUNCTION();
+    m->setMaxDepth(20);
-    int iters;
+    m->setMinSampleCount(2);
-    float mean, sigma;
+    m->setRegressionAccuracy(0);
-    // read validation params
+    m->setUseSurrogates(false);
-    FileNode resultNode =
+    m->setPriors(Mat());
-        validationFS.getFirstTopLevelNode()["validation"][modelName][dataSetNames[testCaseIdx]]["result"];
+    m->setCalculateVarImportance(true);
-    resultNode["iter_count"] >> iters;
+    m->setActiveVarCount(0);
-    if ( iters > 0)
+    m->setTermCriteria(TermCriteria(TermCriteria::COUNT, 100, 0.0));
-    {
+    m->setMaxCategories(dataset.cat_num);
-        resultNode["mean"] >> mean;
+    return m;
-        resultNode["sigma"] >> sigma;
-        model->save(format("/Users/vp/tmp/dtree/testcase_%02d.cur.yml", testCaseIdx));
-        float curErr = get_test_error( testCaseIdx );
-        const int coeff = 4;
-        ts->printf( cvtest::TS::LOG, "Test case = %d; test error = %f; mean error = %f (diff=%f), %d*sigma = %f\n",
-                    testCaseIdx, curErr, mean, abs( curErr - mean), coeff, coeff*sigma );
-        if ( abs( curErr - mean) > coeff*sigma )
-        {
-            ts->printf( cvtest::TS::LOG, "abs(%f - %f) > %f - OUT OF RANGE!\n", curErr, mean, coeff*sigma, coeff );
-            return cvtest::TS::FAIL_BAD_ACCURACY;
-        }
-        else
-            ts->printf( cvtest::TS::LOG, ".\n" );
-
-    }
-    else
-    {
-        ts->printf( cvtest::TS::LOG, "validation info is not suitable" );
-        return cvtest::TS::FAIL_INVALID_TEST_DATA;
-    }
-    return cvtest::TS::OK;
 }

-namespace {
+template<> Ptr<SVMSGD> tuneModel<SVMSGD>(const DatasetDesc &, Ptr<SVMSGD> m)
-
-TEST(ML_DTree, regression) { CV_AMLTest test( CV_DTREE ); test.safe_run(); }
-TEST(ML_Boost, regression) { CV_AMLTest test( CV_BOOST ); test.safe_run(); }
-TEST(ML_RTrees, regression) { CV_AMLTest test( CV_RTREES ); test.safe_run(); }
-TEST(DISABLED_ML_ERTrees, regression) { CV_AMLTest test( CV_ERTREES ); test.safe_run(); }
-
-TEST(ML_NBAYES, regression_5911)
 {
-    int N=12;
+    m->setSvmsgdType(SVMSGD::ASGD);
-    Ptr<ml::NormalBayesClassifier> nb = cv::ml::NormalBayesClassifier::create();
+    m->setMarginType(SVMSGD::SOFT_MARGIN);
+    m->setMarginRegularization(0.00001f);
-    // data:
+    m->setInitialStepSize(0.1f);
-    Mat_<float> X(N,4);
+    m->setStepDecreasingPower(0.75);
-    X << 1,2,3,4, 1,2,3,4, 1,2,3,4, 1,2,3,4,
+    m->setTermCriteria(TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 10000, 0.00001));
-         5,5,5,5, 5,5,5,5, 5,5,5,5, 5,5,5,5,
+    return m;
-         4,3,2,1, 4,3,2,1, 4,3,2,1, 4,3,2,1;
-
-    // labels:
-    Mat_<int> Y(N,1);
-    Y << 0,0,0,0, 1,1,1,1, 2,2,2,2;
-    nb->train(X, ml::ROW_SAMPLE, Y);
-
-    // single prediction:
-    Mat R1,P1;
-    for (int i=0; i<N; i++)
-    {
-        Mat r,p;
-        nb->predictProb(X.row(i), r, p);
-        R1.push_back(r);
-        P1.push_back(p);
-    }
-
-    // bulk prediction (continuous memory):
-    Mat R2,P2;
-    nb->predictProb(X, R2, P2);
-
-    EXPECT_EQ(sum(R1 == R2)[0], 255 * R2.total());
-    EXPECT_EQ(sum(P1 == P2)[0], 255 * P2.total());
-
-    // bulk prediction, with non-continuous memory storage
-    Mat R3_(N, 1+1, CV_32S),
-        P3_(N, 3+1, CV_32F);
-    nb->predictProb(X, R3_.col(0), P3_.colRange(0,3));
-    Mat R3 = R3_.col(0).clone(),
-        P3 = P3_.colRange(0,3).clone();
-
-    EXPECT_EQ(sum(R1 == R3)[0], 255 * R3.total());
-    EXPECT_EQ(sum(P1 == P3)[0], 255 * P3.total());
 }

-TEST(ML_RTrees, getVotes)
+template <>
+struct ModelFactory<Boost> : public IModelFactory
 {
-    int n = 12;
+    ModelFactory(int boostType_) : boostType(boostType_) {}
-    int count, i;
+    Ptr<StatModel> createNew(const DatasetDesc &) const CV_OVERRIDE
-    int label_size = 3;
-    int predicted_class = 0;
-    int max_votes = -1;
-    int val;
-    // RTrees for classification
-    Ptr<ml::RTrees> rt = cv::ml::RTrees::create();
-
-    //data
-    Mat data(n, 4, CV_32F);
-    randu(data, 0, 10);
-
-    //labels
-    Mat labels = (Mat_<int>(n,1) << 0,0,0,0, 1,1,1,1, 2,2,2,2);
-
-    rt->train(data, ml::ROW_SAMPLE, labels);
-
-    //run function
-    Mat test(1, 4, CV_32F);
-    Mat result;
-    randu(test, 0, 10);
-    rt->getVotes(test, result, 0);
-
-    //count vote amount and find highest vote
-    count = 0;
-    const int* result_row = result.ptr<int>(1);
-    for( i = 0; i < label_size; i++ )
     {
-        val = result_row[i];
+        Ptr<Boost> m = Boost::create();
-        //predicted_class = max_votes < val? i;
+        m->setBoostType(boostType);
-        if( max_votes < val )
+        m->setWeakCount(20);
-        {
+        m->setWeightTrimRate(0.95);
-            max_votes = val;
+        m->setMaxDepth(4);
-            predicted_class = i;
+        m->setUseSurrogates(false);
-        }
+        m->setPriors(Mat());
-        count += val;
+        return m;
     }
+    Ptr<StatModel> loadFromFile(const string &filename) const { return Boost::load(filename); }
+    string name() const CV_OVERRIDE { return "Boost"; }
+    int boostType;
+};

-    EXPECT_EQ(count, (int)rt->getRoots().size());
+template <>
-    EXPECT_EQ(result.at<float>(0, predicted_class), rt->predict(test));
+struct ModelFactory<SVM> : public IModelFactory
+{
+    ModelFactory(int svmType_, int kernelType_, double gamma_, double c_, double nu_)
+        : svmType(svmType_), kernelType(kernelType_), gamma(gamma_), c(c_), nu(nu_) {}
+    Ptr<StatModel> createNew(const DatasetDesc &) const CV_OVERRIDE
+    {
+        Ptr<SVM> m = SVM::create();
+        m->setType(svmType);
+        m->setKernel(kernelType);
+        m->setDegree(0);
+        m->setGamma(gamma);
+        m->setCoef0(0);
+        m->setC(c);
+        m->setNu(nu);
+        m->setP(0);
+        return m;
+    }
+    Ptr<StatModel> loadFromFile(const string &filename) const { return SVM::load(filename); }
+    string name() const CV_OVERRIDE { return "SVM"; }
+    int svmType;
+    int kernelType;
+    double gamma;
+    double c;
+    double nu;
+};
+
+//==================================================================================================
+
+struct ML_Params_t
+{
+    Ptr<IModelFactory> factory;
+    string dataset;
+    float mean;
+    float sigma;
+};
+
+void PrintTo(const ML_Params_t & param, std::ostream *os)
+{
+    *os << param.factory->name() << "_" << param.dataset;
+}
+
+ML_Params_t ML_Params_List[] = {
+    { makePtr< ModelFactory<DTrees> >(), "mushroom", 0.027401f, 0.036236f },
+    { makePtr< ModelFactory<DTrees> >(), "adult", 14.279000f, 0.354323f },
+    { makePtr< ModelFactory<DTrees> >(), "vehicle", 29.761162f, 4.823927f },
+    { makePtr< ModelFactory<DTrees> >(), "abalone", 7.297540f, 0.510058f },
+    { makePtr< ModelFactory<Boost> >(Boost::REAL), "adult", 13.894001f, 0.337763f },
+    { makePtr< ModelFactory<Boost> >(Boost::DISCRETE), "mushroom", 0.007274f, 0.029400f },
+    { makePtr< ModelFactory<Boost> >(Boost::LOGIT), "ringnorm", 9.993943f, 0.860256f },
+    { makePtr< ModelFactory<Boost> >(Boost::GENTLE), "spambase", 5.404347f, 0.581716f },
+    { makePtr< ModelFactory<RTrees> >(), "waveform", 17.100641f, 0.630052f },
+    { makePtr< ModelFactory<RTrees> >(), "mushroom", 0.006547f, 0.028248f },
+    { makePtr< ModelFactory<RTrees> >(), "adult", 13.5129f, 0.266065f },
+    { makePtr< ModelFactory<RTrees> >(), "abalone", 4.745199f, 0.282112f },
+    { makePtr< ModelFactory<RTrees> >(), "vehicle", 24.964712f, 4.469287f },
+    { makePtr< ModelFactory<RTrees> >(), "letter", 5.334999f, 0.261142f },
+    { makePtr< ModelFactory<RTrees> >(), "ringnorm", 6.248733f, 0.904713f },
+    { makePtr< ModelFactory<RTrees> >(), "twonorm", 4.506479f, 0.449739f },
+    { makePtr< ModelFactory<RTrees> >(), "spambase", 5.243477f, 0.54232f },
+};
+
+typedef testing::TestWithParam<ML_Params_t> ML_Params;
+
+TEST_P(ML_Params, accuracy)
+{
+    const ML_Params_t & param = GetParam();
+    DatasetDesc &dataset = getDataset(param.dataset);
+    Ptr<TrainData> data = dataset.load();
+    ASSERT_TRUE(data);
+    ASSERT_TRUE(data->getNSamples() > 0);
+
+    Ptr<StatModel> m = param.factory->createNew(dataset);
+    ASSERT_TRUE(m);
+    ASSERT_TRUE(m->train(data, 0));
+
+    float err = m->calcError(data, true, noArray());
+    EXPECT_NEAR(err, param.mean, 4 * param.sigma);
+}
+
+INSTANTIATE_TEST_CASE_P(/**/, ML_Params, testing::ValuesIn(ML_Params_List));
+
+//==================================================================================================
+
+struct ML_SL_Params_t
+{
+    Ptr<IModelFactory> factory;
+    string dataset;
+};
+
+void PrintTo(const ML_SL_Params_t & param, std::ostream *os)
+{
+    *os << param.factory->name() << "_" << param.dataset;
+}
+
+ML_SL_Params_t ML_SL_Params_List[] = {
+    { makePtr< ModelFactory<NormalBayesClassifier> >(), "waveform" },
+    { makePtr< ModelFactory<KNearest> >(), "waveform" },
+    { makePtr< ModelFactory<KNearest> >(), "abalone" },
+    { makePtr< ModelFactory<SVM> >(SVM::C_SVC, SVM::LINEAR, 1, 0.5, 0), "waveform" },
+    { makePtr< ModelFactory<SVM> >(SVM::NU_SVR, SVM::RBF, 0.00225, 62.5, 0.03), "poletelecomm" },
+    { makePtr< ModelFactory<DTrees> >(), "mushroom" },
+    { makePtr< ModelFactory<DTrees> >(), "abalone" },
+    { makePtr< ModelFactory<Boost> >(Boost::REAL), "adult" },
+    { makePtr< ModelFactory<RTrees> >(), "waveform" },
+    { makePtr< ModelFactory<RTrees> >(), "abalone" },
+    { makePtr< ModelFactory<SVMSGD> >(), "waveform" },
+};
+
+typedef testing::TestWithParam<ML_SL_Params_t> ML_SL_Params;
+
+TEST_P(ML_SL_Params, save_load)
+{
+    const ML_SL_Params_t & param = GetParam();
+
+    DatasetDesc &dataset = getDataset(param.dataset);
+    Ptr<TrainData> data = dataset.load();
+    ASSERT_TRUE(data);
+    ASSERT_TRUE(data->getNSamples() > 0);
+
+    Mat responses1, responses2;
+    string file1 = tempfile(".json.gz");
+    string file2 = tempfile(".json.gz");
+    {
+        Ptr<StatModel> m = param.factory->createNew(dataset);
+        ASSERT_TRUE(m);
+        ASSERT_TRUE(m->train(data, 0));
+        m->calcError(data, true, responses1);
+        m->save(file1 + "?base64");
+    }
+    {
+        Ptr<StatModel> m = param.factory->loadFromFile(file1);
+        ASSERT_TRUE(m);
+        m->calcError(data, true, responses2);
+        m->save(file2 + "?base64");
+    }
+    EXPECT_MAT_NEAR(responses1, responses2, 0.0);
+    {
+        ifstream f1(file1.c_str(), std::ios_base::binary);
+        ifstream f2(file2.c_str(), std::ios_base::binary);
+        ASSERT_TRUE(f1.is_open() && f2.is_open());
+        const size_t BUFSZ = 10000;
+        vector<char> buf1(BUFSZ, 0);
+        vector<char> buf2(BUFSZ, 0);
+        while (true)
+        {
+            f1.read(&buf1[0], BUFSZ);
+            f2.read(&buf2[0], BUFSZ);
+            EXPECT_EQ(f1.gcount(), f2.gcount());
+            EXPECT_EQ(f1.eof(), f2.eof());
+            if (!f1.good() || !f2.good() || f1.gcount() != f2.gcount())
+                break;
+            ASSERT_EQ(buf1, buf2);
+        }
+    }
+    remove(file1.c_str());
+    remove(file2.c_str());
+}
+
+INSTANTIATE_TEST_CASE_P(/**/, ML_SL_Params, testing::ValuesIn(ML_SL_Params_List));
+
+//==================================================================================================
+
+TEST(TrainDataGet, layout_ROW_SAMPLE)  // Details: #12236
+{
+    cv::Mat test = cv::Mat::ones(150, 30, CV_32FC1) * 2;
+    test.col(3) += Scalar::all(3);
+    cv::Mat labels = cv::Mat::ones(150, 3, CV_32SC1) * 5;
+    labels.col(1) += 1;
+    cv::Ptr<cv::ml::TrainData> train_data = cv::ml::TrainData::create(test, cv::ml::ROW_SAMPLE, labels);
+    train_data->setTrainTestSplitRatio(0.9);
+
+    Mat tidx = train_data->getTestSampleIdx();
+    EXPECT_EQ((size_t)15, tidx.total());
+
+    Mat tresp = train_data->getTestResponses();
+    EXPECT_EQ(15, tresp.rows);
+    EXPECT_EQ(labels.cols, tresp.cols);
+    EXPECT_EQ(5, tresp.at<int>(0, 0)) << tresp;
+    EXPECT_EQ(6, tresp.at<int>(0, 1)) << tresp;
+    EXPECT_EQ(6, tresp.at<int>(14, 1)) << tresp;
+    EXPECT_EQ(5, tresp.at<int>(14, 2)) << tresp;
+
+    Mat tsamples = train_data->getTestSamples();
+    EXPECT_EQ(15, tsamples.rows);
+    EXPECT_EQ(test.cols, tsamples.cols);
+    EXPECT_EQ(2, tsamples.at<float>(0, 0)) << tsamples;
+    EXPECT_EQ(5, tsamples.at<float>(0, 3)) << tsamples;
+    EXPECT_EQ(2, tsamples.at<float>(14, test.cols - 1)) << tsamples;
+    EXPECT_EQ(5, tsamples.at<float>(14, 3)) << tsamples;
+}
+
+TEST(TrainDataGet, layout_COL_SAMPLE)  // Details: #12236
+{
+    cv::Mat test = cv::Mat::ones(30, 150, CV_32FC1) * 3;
+    test.row(3) += Scalar::all(3);
+    cv::Mat labels = cv::Mat::ones(3, 150, CV_32SC1) * 5;
+    labels.row(1) += 1;
+    cv::Ptr<cv::ml::TrainData> train_data = cv::ml::TrainData::create(test, cv::ml::COL_SAMPLE, labels);
+    train_data->setTrainTestSplitRatio(0.9);
+
+    Mat tidx = train_data->getTestSampleIdx();
+    EXPECT_EQ((size_t)15, tidx.total());
+
+    Mat tresp = train_data->getTestResponses();  // always row-based, transposed
+    EXPECT_EQ(15, tresp.rows);
+    EXPECT_EQ(labels.rows, tresp.cols);
+    EXPECT_EQ(5, tresp.at<int>(0, 0)) << tresp;
+    EXPECT_EQ(6, tresp.at<int>(0, 1)) << tresp;
+    EXPECT_EQ(6, tresp.at<int>(14, 1)) << tresp;
+    EXPECT_EQ(5, tresp.at<int>(14, 2)) << tresp;
+
+    Mat tsamples = train_data->getTestSamples();
+    EXPECT_EQ(15, tsamples.cols);
+    EXPECT_EQ(test.rows, tsamples.rows);
+    EXPECT_EQ(3, tsamples.at<float>(0, 0)) << tsamples;
+    EXPECT_EQ(6, tsamples.at<float>(3, 0)) << tsamples;
+    EXPECT_EQ(6, tsamples.at<float>(3, 14)) << tsamples;
+    EXPECT_EQ(3, tsamples.at<float>(test.rows - 1, 14)) << tsamples;
 }

 }} // namespace
-/* End of file. */
@@ -1,793 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
|
|
||||||
//
|
|
||||||
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
|
|
||||||
//
|
|
||||||
// By downloading, copying, installing or using the software you agree to this license.
|
|
||||||
// If you do not agree to this license, do not download, install,
|
|
||||||
// copy or use the software.
|
|
||||||
//
|
|
||||||
//
|
|
||||||
// Intel License Agreement
|
|
||||||
// For Open Source Computer Vision Library
|
|
||||||
//
|
|
||||||
// Copyright (C) 2000, Intel Corporation, all rights reserved.
|
|
||||||
// Third party copyrights are property of their respective owners.
|
|
||||||
//
|
|
||||||
// Redistribution and use in source and binary forms, with or without modification,
|
|
||||||
// are permitted provided that the following conditions are met:
|
|
||||||
//
|
|
||||||
// * Redistribution's of source code must retain the above copyright notice,
|
|
||||||
// this list of conditions and the following disclaimer.
|
|
||||||
//
|
|
||||||
// * Redistribution's in binary form must reproduce the above copyright notice,
|
|
||||||
// this list of conditions and the following disclaimer in the documentation
|
|
||||||
// and/or other materials provided with the distribution.
|
|
||||||
//
|
|
||||||
// * The name of Intel Corporation may not be used to endorse or promote products
|
|
||||||
// derived from this software without specific prior written permission.
|
|
||||||
//
|
|
||||||
// This software is provided by the copyright holders and contributors "as is" and
|
|
||||||
// any express or implied warranties, including, but not limited to, the implied
|
|
||||||
// warranties of merchantability and fitness for a particular purpose are disclaimed.
|
|
||||||
// In no event shall the Intel Corporation or contributors be liable for any direct,
|
|
||||||
// indirect, incidental, special, exemplary, or consequential damages
|
|
||||||
// (including, but not limited to, procurement of substitute goods or services;
|
|
||||||
// loss of use, data, or profits; or business interruption) however caused
|
|
||||||
// and on any theory of liability, whether in contract, strict liability,
|
|
||||||
// or tort (including negligence or otherwise) arising in any way out of
|
|
||||||
// the use of this software, even if advised of the possibility of such damage.
|
|
||||||
//
|
|
||||||
//M*/
|
|
||||||
|
|
||||||
#include "test_precomp.hpp"
|
|
||||||
|
|
||||||
//#define GENERATE_TESTDATA
|
|
||||||
|
|
||||||
namespace opencv_test { namespace {
|
|
||||||
|
|
||||||
int str_to_svm_type(String& str)
|
|
||||||
{
|
|
||||||
if( !str.compare("C_SVC") )
|
|
||||||
return SVM::C_SVC;
|
|
||||||
if( !str.compare("NU_SVC") )
|
|
||||||
return SVM::NU_SVC;
|
|
||||||
if( !str.compare("ONE_CLASS") )
|
|
||||||
return SVM::ONE_CLASS;
|
|
||||||
if( !str.compare("EPS_SVR") )
|
|
||||||
return SVM::EPS_SVR;
|
|
||||||
if( !str.compare("NU_SVR") )
|
|
||||||
return SVM::NU_SVR;
|
|
||||||
CV_Error( CV_StsBadArg, "incorrect svm type string" );
|
|
||||||
}
|
|
||||||
int str_to_svm_kernel_type( String& str )
|
|
||||||
{
|
|
||||||
if( !str.compare("LINEAR") )
|
|
||||||
return SVM::LINEAR;
|
|
||||||
if( !str.compare("POLY") )
|
|
||||||
return SVM::POLY;
|
|
||||||
if( !str.compare("RBF") )
|
|
||||||
return SVM::RBF;
|
|
||||||
if( !str.compare("SIGMOID") )
|
|
||||||
return SVM::SIGMOID;
|
|
||||||
CV_Error( CV_StsBadArg, "incorrect svm type string" );
|
|
||||||
}
|
|
||||||
|
|
||||||
// 4. em
|
|
||||||
// 5. ann
|
|
||||||
int str_to_ann_train_method( String& str )
|
|
||||||
{
|
|
||||||
if( !str.compare("BACKPROP") )
|
|
||||||
return ANN_MLP::BACKPROP;
|
|
||||||
if (!str.compare("RPROP"))
|
|
||||||
return ANN_MLP::RPROP;
|
|
||||||
if (!str.compare("ANNEAL"))
|
|
||||||
return ANN_MLP::ANNEAL;
|
|
||||||
CV_Error( CV_StsBadArg, "incorrect ann train method string" );
|
|
||||||
}
|
|
||||||
|
|
||||||
#if 0
|
|
||||||
int str_to_ann_activation_function(String& str)
|
|
||||||
{
|
|
||||||
if (!str.compare("IDENTITY"))
|
|
||||||
return ANN_MLP::IDENTITY;
|
|
||||||
if (!str.compare("SIGMOID_SYM"))
|
|
||||||
return ANN_MLP::SIGMOID_SYM;
|
|
||||||
if (!str.compare("GAUSSIAN"))
|
|
||||||
return ANN_MLP::GAUSSIAN;
|
|
||||||
if (!str.compare("RELU"))
|
|
||||||
return ANN_MLP::RELU;
|
|
||||||
if (!str.compare("LEAKYRELU"))
|
|
||||||
return ANN_MLP::LEAKYRELU;
|
|
||||||
CV_Error(CV_StsBadArg, "incorrect ann activation function string");
|
|
||||||
}
|
|
||||||
#endif
|
|
||||||
|
|
||||||
void ann_check_data( Ptr<TrainData> _data )
|
|
||||||
{
|
|
||||||
CV_TRACE_FUNCTION();
|
|
||||||
CV_Assert(!_data.empty());
|
|
||||||
Mat values = _data->getSamples();
|
|
||||||
Mat var_idx = _data->getVarIdx();
|
|
||||||
int nvars = (int)var_idx.total();
|
|
||||||
if( nvars != 0 && nvars != values.cols )
|
|
||||||
CV_Error( CV_StsBadArg, "var_idx is not supported" );
|
|
||||||
if( !_data->getMissing().empty() )
|
|
||||||
CV_Error( CV_StsBadArg, "missing values are not supported" );
|
|
||||||
}
|
|
||||||
|
|
||||||
// unroll the categorical responses to binary vectors
Mat ann_get_new_responses( Ptr<TrainData> _data, map<int, int>& cls_map )
{
    CV_TRACE_FUNCTION();
    CV_Assert(!_data.empty());
    Mat train_sidx = _data->getTrainSampleIdx();
    int* train_sidx_ptr = train_sidx.ptr<int>();
    Mat responses = _data->getResponses();
    int cls_count = 0;
    // construct cls_map
    cls_map.clear();
    int nresponses = (int)responses.total();
    int si, n = !train_sidx.empty() ? (int)train_sidx.total() : nresponses;

    for( si = 0; si < n; si++ )
    {
        int sidx = train_sidx_ptr ? train_sidx_ptr[si] : si;
        int r = cvRound(responses.at<float>(sidx));
        CV_DbgAssert( fabs(responses.at<float>(sidx) - r) < FLT_EPSILON );
        map<int,int>::iterator it = cls_map.find(r);
        if( it == cls_map.end() )
            cls_map[r] = cls_count++;
    }
    Mat new_responses = Mat::zeros( nresponses, cls_count, CV_32F );
    for( si = 0; si < n; si++ )
    {
        int sidx = train_sidx_ptr ? train_sidx_ptr[si] : si;
        int r = cvRound(responses.at<float>(sidx));
        int cidx = cls_map[r];
        new_responses.at<float>(sidx, cidx) = 1.f;
    }
    return new_responses;
}

float ann_calc_error( Ptr<StatModel> ann, Ptr<TrainData> _data, map<int, int>& cls_map, int type, vector<float> *resp_labels )
{
    CV_TRACE_FUNCTION();
    CV_Assert(!ann.empty());
    CV_Assert(!_data.empty());
    float err = 0;
    Mat samples = _data->getSamples();
    Mat responses = _data->getResponses();
    Mat sample_idx = (type == CV_TEST_ERROR) ? _data->getTestSampleIdx() : _data->getTrainSampleIdx();
    int* sidx = !sample_idx.empty() ? sample_idx.ptr<int>() : 0;
    ann_check_data( _data );
    int sample_count = (int)sample_idx.total();
    sample_count = (type == CV_TRAIN_ERROR && sample_count == 0) ? samples.rows : sample_count;
    float* pred_resp = 0;
    vector<float> innresp;
    if( sample_count > 0 )
    {
        if( resp_labels )
        {
            resp_labels->resize( sample_count );
            pred_resp = &((*resp_labels)[0]);
        }
        else
        {
            innresp.resize( sample_count );
            pred_resp = &(innresp[0]);
        }
    }
    int cls_count = (int)cls_map.size();
    Mat output( 1, cls_count, CV_32FC1 );

    for( int i = 0; i < sample_count; i++ )
    {
        int si = sidx ? sidx[i] : i;
        Mat sample = samples.row(si);
        ann->predict( sample, output );
        Point best_cls;
        minMaxLoc(output, 0, 0, 0, &best_cls, 0);
        int r = cvRound(responses.at<float>(si));
        CV_DbgAssert( fabs(responses.at<float>(si) - r) < FLT_EPSILON );
        r = cls_map[r];
        int d = best_cls.x == r ? 0 : 1;
        err += d;
        pred_resp[i] = (float)best_cls.x;
    }
    err = sample_count ? err / (float)sample_count * 100 : -FLT_MAX;
    return err;
}

TEST(ML_ANN, ActivationFunction)
{
    String folder = string(cvtest::TS::ptr()->get_data_path());
    String original_path = folder + "waveform.data";
    String dataname = folder + "waveform";

    Ptr<TrainData> tdata = TrainData::loadFromCSV(original_path, 0);

    ASSERT_FALSE(tdata.empty()) << "Could not find test data file : " << original_path;
    RNG& rng = theRNG();
    rng.state = 1027401484159173092;
    tdata->setTrainTestSplit(500);

    vector<int> activationType;
    activationType.push_back(ml::ANN_MLP::IDENTITY);
    activationType.push_back(ml::ANN_MLP::SIGMOID_SYM);
    activationType.push_back(ml::ANN_MLP::GAUSSIAN);
    activationType.push_back(ml::ANN_MLP::RELU);
    activationType.push_back(ml::ANN_MLP::LEAKYRELU);
    vector<String> activationName;
    activationName.push_back("_identity");
    activationName.push_back("_sigmoid_sym");
    activationName.push_back("_gaussian");
    activationName.push_back("_relu");
    activationName.push_back("_leakyrelu");
    for (size_t i = 0; i < activationType.size(); i++)
    {
        Ptr<ml::ANN_MLP> x = ml::ANN_MLP::create();
        Mat_<int> layerSizes(1, 4);
        layerSizes(0, 0) = tdata->getNVars();
        layerSizes(0, 1) = 100;
        layerSizes(0, 2) = 100;
        layerSizes(0, 3) = tdata->getResponses().cols;
        x->setLayerSizes(layerSizes);
        x->setActivationFunction(activationType[i]);
        x->setTrainMethod(ml::ANN_MLP::RPROP, 0.01, 0.1);
        x->setTermCriteria(TermCriteria(TermCriteria::COUNT, 300, 0.01));
        x->train(tdata, ml::ANN_MLP::NO_OUTPUT_SCALE);
        ASSERT_TRUE(x->isTrained()) << "Could not train networks with " << activationName[i];
#ifdef GENERATE_TESTDATA
        x->save(dataname + activationName[i] + ".yml");
#else
        Ptr<ml::ANN_MLP> y = Algorithm::load<ANN_MLP>(dataname + activationName[i] + ".yml");
        ASSERT_TRUE(y) << "Could not load " << dataname + activationName[i] + ".yml";
        Mat testSamples = tdata->getTestSamples();
        Mat rx, ry, dst;
        x->predict(testSamples, rx);
        y->predict(testSamples, ry);
        double n = cvtest::norm(rx, ry, NORM_INF);
        EXPECT_LT(n, FLT_EPSILON) << "Predictions are not equal for " << dataname + activationName[i] + ".yml and " << activationName[i];
#endif
    }
}

CV_ENUM(ANN_MLP_METHOD, ANN_MLP::RPROP, ANN_MLP::ANNEAL)

typedef tuple<ANN_MLP_METHOD, string, int> ML_ANN_METHOD_Params;
typedef TestWithParam<ML_ANN_METHOD_Params> ML_ANN_METHOD;

TEST_P(ML_ANN_METHOD, Test)
{
    int methodType = get<0>(GetParam());
    string methodName = get<1>(GetParam());
    int N = get<2>(GetParam());

    String folder = string(cvtest::TS::ptr()->get_data_path());
    String original_path = folder + "waveform.data";
    String dataname = folder + "waveform" + '_' + methodName;

    Ptr<TrainData> tdata2 = TrainData::loadFromCSV(original_path, 0);
    ASSERT_FALSE(tdata2.empty()) << "Could not find test data file : " << original_path;

    Mat samples = tdata2->getSamples()(Range(0, N), Range::all());
    Mat responses(N, 3, CV_32FC1, Scalar(0));
    for (int i = 0; i < N; i++)
        responses.at<float>(i, static_cast<int>(tdata2->getResponses().at<float>(i, 0))) = 1;
    Ptr<TrainData> tdata = TrainData::create(samples, ml::ROW_SAMPLE, responses);
    ASSERT_FALSE(tdata.empty());

    RNG& rng = theRNG();
    rng.state = 0;
    tdata->setTrainTestSplitRatio(0.8);

    Mat testSamples = tdata->getTestSamples();

#ifdef GENERATE_TESTDATA
    {
        Ptr<ml::ANN_MLP> xx = ml::ANN_MLP::create();
        Mat_<int> layerSizesXX(1, 4);
        layerSizesXX(0, 0) = tdata->getNVars();
        layerSizesXX(0, 1) = 30;
        layerSizesXX(0, 2) = 30;
        layerSizesXX(0, 3) = tdata->getResponses().cols;
        xx->setLayerSizes(layerSizesXX);
        xx->setActivationFunction(ml::ANN_MLP::SIGMOID_SYM);
        xx->setTrainMethod(ml::ANN_MLP::RPROP);
        xx->setTermCriteria(TermCriteria(TermCriteria::COUNT, 1, 0.01));
        xx->train(tdata, ml::ANN_MLP::NO_OUTPUT_SCALE + ml::ANN_MLP::NO_INPUT_SCALE);
        FileStorage fs;
        fs.open(dataname + "_init_weight.yml.gz", FileStorage::WRITE + FileStorage::BASE64);
        xx->write(fs);
        fs.release();
    }
#endif
    {
        FileStorage fs;
        fs.open(dataname + "_init_weight.yml.gz", FileStorage::READ);
        Ptr<ml::ANN_MLP> x = ml::ANN_MLP::create();
        x->read(fs.root());
        x->setTrainMethod(methodType);
        if (methodType == ml::ANN_MLP::ANNEAL)
        {
            x->setAnnealEnergyRNG(RNG(CV_BIG_INT(0xffffffff)));
            x->setAnnealInitialT(12);
            x->setAnnealFinalT(0.15);
            x->setAnnealCoolingRatio(0.96);
            x->setAnnealItePerStep(11);
        }
        x->setTermCriteria(TermCriteria(TermCriteria::COUNT, 100, 0.01));
        x->train(tdata, ml::ANN_MLP::NO_OUTPUT_SCALE + ml::ANN_MLP::NO_INPUT_SCALE + ml::ANN_MLP::UPDATE_WEIGHTS);
        ASSERT_TRUE(x->isTrained()) << "Could not train networks with " << methodName;
        string filename = dataname + ".yml.gz";
        Mat r_gold;
#ifdef GENERATE_TESTDATA
        x->save(filename);
        x->predict(testSamples, r_gold);
        {
            FileStorage fs_response(dataname + "_response.yml.gz", FileStorage::WRITE + FileStorage::BASE64);
            fs_response << "response" << r_gold;
        }
#else
        {
            FileStorage fs_response(dataname + "_response.yml.gz", FileStorage::READ);
            fs_response["response"] >> r_gold;
        }
#endif
        ASSERT_FALSE(r_gold.empty());
        Ptr<ml::ANN_MLP> y = Algorithm::load<ANN_MLP>(filename);
        ASSERT_TRUE(y) << "Could not load " << filename;
        Mat rx, ry;
        for (int j = 0; j < 4; j++)
        {
            rx = x->getWeights(j);
            ry = y->getWeights(j);
            double n = cvtest::norm(rx, ry, NORM_INF);
            EXPECT_LT(n, FLT_EPSILON) << "Weights are not equal for layer: " << j;
        }
        x->predict(testSamples, rx);
        y->predict(testSamples, ry);
        double n = cvtest::norm(ry, rx, NORM_INF);
        EXPECT_LT(n, FLT_EPSILON) << "Predictions are not equal to the result of the saved model";
        n = cvtest::norm(r_gold, rx, NORM_INF);
        EXPECT_LT(n, FLT_EPSILON) << "Predictions are not equal to the 'gold' response";
    }
}

INSTANTIATE_TEST_CASE_P(/*none*/, ML_ANN_METHOD,
    testing::Values(
        make_tuple<ANN_MLP_METHOD, string, int>(ml::ANN_MLP::RPROP, "rprop", 5000),
        make_tuple<ANN_MLP_METHOD, string, int>(ml::ANN_MLP::ANNEAL, "anneal", 1000)
        //make_pair<ANN_MLP_METHOD, string>(ml::ANN_MLP::BACKPROP, "backprop", 5000); -----> NO BACKPROP TEST
    )
);

// 6. dtree
// 7. boost
int str_to_boost_type( String& str )
{
    if ( !str.compare("DISCRETE") )
        return Boost::DISCRETE;
    if ( !str.compare("REAL") )
        return Boost::REAL;
    if ( !str.compare("LOGIT") )
        return Boost::LOGIT;
    if ( !str.compare("GENTLE") )
        return Boost::GENTLE;
    CV_Error( CV_StsBadArg, "incorrect boost type string" );
}

// 8. rtrees
// 9. ertrees

int str_to_svmsgd_type( String& str )
{
    if ( !str.compare("SGD") )
        return SVMSGD::SGD;
    if ( !str.compare("ASGD") )
        return SVMSGD::ASGD;
    CV_Error( CV_StsBadArg, "incorrect svmsgd type string" );
}

int str_to_margin_type( String& str )
{
    if ( !str.compare("SOFT_MARGIN") )
        return SVMSGD::SOFT_MARGIN;
    if ( !str.compare("HARD_MARGIN") )
        return SVMSGD::HARD_MARGIN;
    CV_Error( CV_StsBadArg, "incorrect svmsgd margin type string" );
}

}

// ---------------------------------- MLBaseTest ---------------------------------------------------

CV_MLBaseTest::CV_MLBaseTest(const char* _modelName)
{
    int64 seeds[] = { CV_BIG_INT(0x00009fff4f9c8d52),
        CV_BIG_INT(0x0000a17166072c7c),
        CV_BIG_INT(0x0201b32115cd1f9a),
        CV_BIG_INT(0x0513cb37abcd1234),
        CV_BIG_INT(0x0001a2b3c4d5f678)
    };

    int seedCount = sizeof(seeds)/sizeof(seeds[0]);
    RNG& rng = theRNG();

    initSeed = rng.state;
    rng.state = seeds[rng(seedCount)];

    modelName = _modelName;
}

CV_MLBaseTest::~CV_MLBaseTest()
{
    if( validationFS.isOpened() )
        validationFS.release();
    theRNG().state = initSeed;
}

int CV_MLBaseTest::read_params( const cv::FileStorage& _fs )
{
    CV_TRACE_FUNCTION();
    if( !_fs.isOpened() )
        test_case_count = -1;
    else
    {
        FileNode fn = _fs.getFirstTopLevelNode()["run_params"][modelName];
        test_case_count = (int)fn.size();
        if( test_case_count <= 0 )
            test_case_count = -1;
        if( test_case_count > 0 )
        {
            dataSetNames.resize( test_case_count );
            FileNodeIterator it = fn.begin();
            for( int i = 0; i < test_case_count; i++, ++it )
            {
                dataSetNames[i] = (string)*it;
            }
        }
    }
    return cvtest::TS::OK;
}

void CV_MLBaseTest::run( int )
{
    CV_TRACE_FUNCTION();
    string filename = ts->get_data_path();
    filename += get_validation_filename();
    validationFS.open( filename, FileStorage::READ );
    read_params( validationFS );

    int code = cvtest::TS::OK;
    for (int i = 0; i < test_case_count; i++)
    {
        CV_TRACE_REGION("iteration");
        int temp_code = run_test_case( i );
        if (temp_code == cvtest::TS::OK)
            temp_code = validate_test_results( i );
        if (temp_code != cvtest::TS::OK)
            code = temp_code;
    }
    if ( test_case_count <= 0)
    {
        ts->printf( cvtest::TS::LOG, "validation file is not determined or not correct" );
        code = cvtest::TS::FAIL_INVALID_TEST_DATA;
    }
    ts->set_failed_test_info( code );
}

int CV_MLBaseTest::prepare_test_case( int test_case_idx )
{
    CV_TRACE_FUNCTION();
    clear();

    string dataPath = ts->get_data_path();
    if ( dataPath.empty() )
    {
        ts->printf( cvtest::TS::LOG, "data path is empty" );
        return cvtest::TS::FAIL_INVALID_TEST_DATA;
    }

    string dataName = dataSetNames[test_case_idx],
        filename = dataPath + dataName + ".data";

    FileNode dataParamsNode = validationFS.getFirstTopLevelNode()["validation"][modelName][dataName]["data_params"];
    CV_DbgAssert( !dataParamsNode.empty() );

    CV_DbgAssert( !dataParamsNode["LS"].empty() );
    int trainSampleCount = (int)dataParamsNode["LS"];

    CV_DbgAssert( !dataParamsNode["resp_idx"].empty() );
    int respIdx = (int)dataParamsNode["resp_idx"];

    CV_DbgAssert( !dataParamsNode["types"].empty() );
    String varTypes = (String)dataParamsNode["types"];

    data = TrainData::loadFromCSV(filename, 0, respIdx, respIdx+1, varTypes);
    if( data.empty() )
    {
        ts->printf( cvtest::TS::LOG, "file %s can not be read\n", filename.c_str() );
        return cvtest::TS::FAIL_INVALID_TEST_DATA;
    }

    data->setTrainTestSplit(trainSampleCount);
    return cvtest::TS::OK;
}

string& CV_MLBaseTest::get_validation_filename()
{
    return validationFN;
}

int CV_MLBaseTest::train( int testCaseIdx )
{
    CV_TRACE_FUNCTION();
    bool is_trained = false;
    FileNode modelParamsNode =
        validationFS.getFirstTopLevelNode()["validation"][modelName][dataSetNames[testCaseIdx]]["model_params"];

    if( modelName == CV_NBAYES )
        model = NormalBayesClassifier::create();
    else if( modelName == CV_KNEAREST )
    {
        model = KNearest::create();
    }
    else if( modelName == CV_SVM )
    {
        String svm_type_str, kernel_type_str;
        modelParamsNode["svm_type"] >> svm_type_str;
        modelParamsNode["kernel_type"] >> kernel_type_str;
        Ptr<SVM> m = SVM::create();
        m->setType(str_to_svm_type( svm_type_str ));
        m->setKernel(str_to_svm_kernel_type( kernel_type_str ));
        m->setDegree(modelParamsNode["degree"]);
        m->setGamma(modelParamsNode["gamma"]);
        m->setCoef0(modelParamsNode["coef0"]);
        m->setC(modelParamsNode["C"]);
        m->setNu(modelParamsNode["nu"]);
        m->setP(modelParamsNode["p"]);
        model = m;
    }
    else if( modelName == CV_EM )
    {
        assert( 0 );
    }
    else if( modelName == CV_ANN )
    {
        String train_method_str;
        double param1, param2;
        modelParamsNode["train_method"] >> train_method_str;
        modelParamsNode["param1"] >> param1;
        modelParamsNode["param2"] >> param2;
        Mat new_responses = ann_get_new_responses( data, cls_map );
        // binarize the responses
        data = TrainData::create(data->getSamples(), data->getLayout(), new_responses,
                                 data->getVarIdx(), data->getTrainSampleIdx());
        int layer_sz[] = { data->getNAllVars(), 100, 100, (int)cls_map.size() };
        Mat layer_sizes( 1, (int)(sizeof(layer_sz)/sizeof(layer_sz[0])), CV_32S, layer_sz );
        Ptr<ANN_MLP> m = ANN_MLP::create();
        m->setLayerSizes(layer_sizes);
        m->setActivationFunction(ANN_MLP::SIGMOID_SYM, 0, 0);
        m->setTermCriteria(TermCriteria(TermCriteria::COUNT,300,0.01));
        m->setTrainMethod(str_to_ann_train_method(train_method_str), param1, param2);
        model = m;
    }
    else if( modelName == CV_DTREE )
    {
        int MAX_DEPTH, MIN_SAMPLE_COUNT, MAX_CATEGORIES, CV_FOLDS;
        float REG_ACCURACY = 0;
        bool USE_SURROGATE = false, IS_PRUNED;
        modelParamsNode["max_depth"] >> MAX_DEPTH;
        modelParamsNode["min_sample_count"] >> MIN_SAMPLE_COUNT;
        //modelParamsNode["use_surrogate"] >> USE_SURROGATE;
        modelParamsNode["max_categories"] >> MAX_CATEGORIES;
        modelParamsNode["cv_folds"] >> CV_FOLDS;
        modelParamsNode["is_pruned"] >> IS_PRUNED;

        Ptr<DTrees> m = DTrees::create();
        m->setMaxDepth(MAX_DEPTH);
        m->setMinSampleCount(MIN_SAMPLE_COUNT);
        m->setRegressionAccuracy(REG_ACCURACY);
        m->setUseSurrogates(USE_SURROGATE);
        m->setMaxCategories(MAX_CATEGORIES);
        m->setCVFolds(CV_FOLDS);
        m->setUse1SERule(false);
        m->setTruncatePrunedTree(IS_PRUNED);
        m->setPriors(Mat());
        model = m;
    }
    else if( modelName == CV_BOOST )
    {
        int BOOST_TYPE, WEAK_COUNT, MAX_DEPTH;
        float WEIGHT_TRIM_RATE;
        bool USE_SURROGATE = false;
        String typeStr;
        modelParamsNode["type"] >> typeStr;
        BOOST_TYPE = str_to_boost_type( typeStr );
        modelParamsNode["weak_count"] >> WEAK_COUNT;
        modelParamsNode["weight_trim_rate"] >> WEIGHT_TRIM_RATE;
        modelParamsNode["max_depth"] >> MAX_DEPTH;
        //modelParamsNode["use_surrogate"] >> USE_SURROGATE;

        Ptr<Boost> m = Boost::create();
        m->setBoostType(BOOST_TYPE);
        m->setWeakCount(WEAK_COUNT);
        m->setWeightTrimRate(WEIGHT_TRIM_RATE);
        m->setMaxDepth(MAX_DEPTH);
        m->setUseSurrogates(USE_SURROGATE);
        m->setPriors(Mat());
        model = m;
    }
    else if( modelName == CV_RTREES )
    {
        int MAX_DEPTH, MIN_SAMPLE_COUNT, MAX_CATEGORIES, CV_FOLDS, NACTIVE_VARS, MAX_TREES_NUM;
        float REG_ACCURACY = 0, OOB_EPS = 0.0;
        bool USE_SURROGATE = false, IS_PRUNED;
        modelParamsNode["max_depth"] >> MAX_DEPTH;
        modelParamsNode["min_sample_count"] >> MIN_SAMPLE_COUNT;
        //modelParamsNode["use_surrogate"] >> USE_SURROGATE;
        modelParamsNode["max_categories"] >> MAX_CATEGORIES;
        modelParamsNode["cv_folds"] >> CV_FOLDS;
        modelParamsNode["is_pruned"] >> IS_PRUNED;
        modelParamsNode["nactive_vars"] >> NACTIVE_VARS;
        modelParamsNode["max_trees_num"] >> MAX_TREES_NUM;

        Ptr<RTrees> m = RTrees::create();
        m->setMaxDepth(MAX_DEPTH);
        m->setMinSampleCount(MIN_SAMPLE_COUNT);
        m->setRegressionAccuracy(REG_ACCURACY);
        m->setUseSurrogates(USE_SURROGATE);
        m->setMaxCategories(MAX_CATEGORIES);
        m->setPriors(Mat());
        m->setCalculateVarImportance(true);
        m->setActiveVarCount(NACTIVE_VARS);
        m->setTermCriteria(TermCriteria(TermCriteria::COUNT, MAX_TREES_NUM, OOB_EPS));
        model = m;
    }
    else if( modelName == CV_SVMSGD )
    {
        String svmsgdTypeStr;
        modelParamsNode["svmsgdType"] >> svmsgdTypeStr;

        Ptr<SVMSGD> m = SVMSGD::create();
        int svmsgdType = str_to_svmsgd_type( svmsgdTypeStr );
        m->setSvmsgdType(svmsgdType);

        String marginTypeStr;
        modelParamsNode["marginType"] >> marginTypeStr;
        int marginType = str_to_margin_type( marginTypeStr );
        m->setMarginType(marginType);

        m->setMarginRegularization(modelParamsNode["marginRegularization"]);
        m->setInitialStepSize(modelParamsNode["initialStepSize"]);
        m->setStepDecreasingPower(modelParamsNode["stepDecreasingPower"]);
        m->setTermCriteria(TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 10000, 0.00001));
        model = m;
    }

    if( !model.empty() )
        is_trained = model->train(data, 0);

    if( !is_trained )
    {
        ts->printf( cvtest::TS::LOG, "in test case %d model training failed", testCaseIdx );
        return cvtest::TS::FAIL_INVALID_OUTPUT;
    }
    return cvtest::TS::OK;
}

float CV_MLBaseTest::get_test_error( int /*testCaseIdx*/, vector<float> *resp )
{
    CV_TRACE_FUNCTION();
    int type = CV_TEST_ERROR;
    float err = 0;
    Mat _resp;
    if( modelName == CV_EM )
        assert( 0 );
    else if( modelName == CV_ANN )
        err = ann_calc_error( model, data, cls_map, type, resp );
    else if( modelName == CV_DTREE || modelName == CV_BOOST || modelName == CV_RTREES ||
             modelName == CV_SVM || modelName == CV_NBAYES || modelName == CV_KNEAREST || modelName == CV_SVMSGD )
        err = model->calcError( data, true, _resp );
    if( !_resp.empty() && resp )
        _resp.convertTo(*resp, CV_32F);
    return err;
}

void CV_MLBaseTest::save( const char* filename )
{
    CV_TRACE_FUNCTION();
    model->save( filename );
}

void CV_MLBaseTest::load( const char* filename )
{
    CV_TRACE_FUNCTION();
    if( modelName == CV_NBAYES )
        model = Algorithm::load<NormalBayesClassifier>( filename );
    else if( modelName == CV_KNEAREST )
        model = Algorithm::load<KNearest>( filename );
    else if( modelName == CV_SVM )
        model = Algorithm::load<SVM>( filename );
    else if( modelName == CV_ANN )
        model = Algorithm::load<ANN_MLP>( filename );
    else if( modelName == CV_DTREE )
        model = Algorithm::load<DTrees>( filename );
    else if( modelName == CV_BOOST )
        model = Algorithm::load<Boost>( filename );
    else if( modelName == CV_RTREES )
        model = Algorithm::load<RTrees>( filename );
    else if( modelName == CV_SVMSGD )
        model = Algorithm::load<SVMSGD>( filename );
    else
        CV_Error( CV_StsNotImplemented, "invalid stat model name");
}

TEST(TrainDataGet, layout_ROW_SAMPLE)  // Details: #12236
{
    cv::Mat test = cv::Mat::ones(150, 30, CV_32FC1) * 2;
    test.col(3) += Scalar::all(3);
    cv::Mat labels = cv::Mat::ones(150, 3, CV_32SC1) * 5;
    labels.col(1) += 1;
    cv::Ptr<cv::ml::TrainData> train_data = cv::ml::TrainData::create(test, cv::ml::ROW_SAMPLE, labels);
    train_data->setTrainTestSplitRatio(0.9);

    Mat tidx = train_data->getTestSampleIdx();
    EXPECT_EQ((size_t)15, tidx.total());

    Mat tresp = train_data->getTestResponses();
    EXPECT_EQ(15, tresp.rows);
    EXPECT_EQ(labels.cols, tresp.cols);
    EXPECT_EQ(5, tresp.at<int>(0, 0)) << tresp;
    EXPECT_EQ(6, tresp.at<int>(0, 1)) << tresp;
    EXPECT_EQ(6, tresp.at<int>(14, 1)) << tresp;
    EXPECT_EQ(5, tresp.at<int>(14, 2)) << tresp;

    Mat tsamples = train_data->getTestSamples();
    EXPECT_EQ(15, tsamples.rows);
    EXPECT_EQ(test.cols, tsamples.cols);
    EXPECT_EQ(2, tsamples.at<float>(0, 0)) << tsamples;
    EXPECT_EQ(5, tsamples.at<float>(0, 3)) << tsamples;
    EXPECT_EQ(2, tsamples.at<float>(14, test.cols - 1)) << tsamples;
    EXPECT_EQ(5, tsamples.at<float>(14, 3)) << tsamples;
}

TEST(TrainDataGet, layout_COL_SAMPLE)  // Details: #12236
{
    cv::Mat test = cv::Mat::ones(30, 150, CV_32FC1) * 3;
    test.row(3) += Scalar::all(3);
    cv::Mat labels = cv::Mat::ones(3, 150, CV_32SC1) * 5;
    labels.row(1) += 1;
    cv::Ptr<cv::ml::TrainData> train_data = cv::ml::TrainData::create(test, cv::ml::COL_SAMPLE, labels);
    train_data->setTrainTestSplitRatio(0.9);

    Mat tidx = train_data->getTestSampleIdx();
    EXPECT_EQ((size_t)15, tidx.total());

    Mat tresp = train_data->getTestResponses();  // always row-based, transposed
    EXPECT_EQ(15, tresp.rows);
    EXPECT_EQ(labels.rows, tresp.cols);
    EXPECT_EQ(5, tresp.at<int>(0, 0)) << tresp;
    EXPECT_EQ(6, tresp.at<int>(0, 1)) << tresp;
    EXPECT_EQ(6, tresp.at<int>(14, 1)) << tresp;
    EXPECT_EQ(5, tresp.at<int>(14, 2)) << tresp;

    Mat tsamples = train_data->getTestSamples();
    EXPECT_EQ(15, tsamples.cols);
    EXPECT_EQ(test.rows, tsamples.rows);
    EXPECT_EQ(3, tsamples.at<float>(0, 0)) << tsamples;
    EXPECT_EQ(6, tsamples.at<float>(3, 0)) << tsamples;
    EXPECT_EQ(6, tsamples.at<float>(3, 14)) << tsamples;
    EXPECT_EQ(3, tsamples.at<float>(test.rows - 1, 14)) << tsamples;
}

} // namespace
/* End of file. */
@@ -2,10 +2,15 @@
 #define __OPENCV_TEST_PRECOMP_HPP__

 #include "opencv2/ts.hpp"
+#include <opencv2/ts/cuda_test.hpp> // EXPECT_MAT_NEAR
 #include "opencv2/ml.hpp"
 #include "opencv2/core/core_c.h"

+#include <fstream>
+using std::ifstream;
+
 namespace opencv_test {

 using namespace cv::ml;

 #define CV_NBAYES "nbayes"
@@ -19,8 +24,6 @@ using namespace cv::ml;
 #define CV_ERTREES "ertrees"
 #define CV_SVMSGD "svmsgd"

-enum { CV_TRAIN_ERROR=0, CV_TEST_ERROR=1 };
-
 using cv::Ptr;
 using cv::ml::StatModel;
 using cv::ml::TrainData;
@@ -34,58 +37,14 @@ using cv::ml::Boost;
 using cv::ml::RTrees;
 using cv::ml::SVMSGD;

-class CV_MLBaseTest : public cvtest::BaseTest
-{
-public:
-    CV_MLBaseTest( const char* _modelName );
-    virtual ~CV_MLBaseTest();
-protected:
-    virtual int read_params( const cv::FileStorage& fs );
-    virtual void run( int startFrom );
-    virtual int prepare_test_case( int testCaseIdx );
-    virtual std::string& get_validation_filename();
-    virtual int run_test_case( int testCaseIdx ) = 0;
-    virtual int validate_test_results( int testCaseIdx ) = 0;
-
-    int train( int testCaseIdx );
-    float get_test_error( int testCaseIdx, std::vector<float> *resp = 0 );
-    void save( const char* filename );
-    void load( const char* filename );
-
-    Ptr<TrainData> data;
-    std::string modelName, validationFN;
-    std::vector<std::string> dataSetNames;
-    cv::FileStorage validationFS;
-
-    Ptr<StatModel> model;
-
-    std::map<int, int> cls_map;
-
-    int64 initSeed;
-};
-
-class CV_AMLTest : public CV_MLBaseTest
-{
-public:
-    CV_AMLTest( const char* _modelName );
-    virtual ~CV_AMLTest() {}
-protected:
-    virtual int run_test_case( int testCaseIdx );
-    virtual int validate_test_results( int testCaseIdx );
-};
-
-class CV_SLMLTest : public CV_MLBaseTest
-{
-public:
-    CV_SLMLTest( const char* _modelName );
-    virtual ~CV_SLMLTest() {}
-protected:
-    virtual int run_test_case( int testCaseIdx );
-    virtual int validate_test_results( int testCaseIdx );
-
-    std::vector<float> test_resps1, test_resps2; // predicted responses for test data
-    std::string fname1, fname2;
-};
-
+void defaultDistribs( Mat& means, vector<Mat>& covs, int type=CV_32FC1 );
+void generateData( Mat& data, Mat& labels, const vector<int>& sizes, const Mat& _means, const vector<Mat>& covs, int dataType, int labelType );
+int maxIdx( const vector<int>& count );
+bool getLabelsMap( const Mat& labels, const vector<int>& sizes, vector<int>& labelsMap, bool checkClusterUniq=true );
+bool calcErr( const Mat& labels, const Mat& origLabels, const vector<int>& sizes, float& err, bool labelsEquivalent = true, bool checkClusterUniq=true );
+
+// used in LR test
+bool calculateError( const Mat& _p_labels, const Mat& _o_labels, float& error);
+
 } // namespace
modules/ml/test/test_rtrees.cpp (new file, 54 lines)
@@ -0,0 +1,54 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

#include "test_precomp.hpp"

namespace opencv_test { namespace {

TEST(ML_RTrees, getVotes)
{
    int n = 12;
    int count, i;
    int label_size = 3;
    int predicted_class = 0;
    int max_votes = -1;
    int val;
    // RTrees for classification
    Ptr<ml::RTrees> rt = cv::ml::RTrees::create();

    // data
    Mat data(n, 4, CV_32F);
    randu(data, 0, 10);

    // labels
    Mat labels = (Mat_<int>(n,1) << 0,0,0,0, 1,1,1,1, 2,2,2,2);

    rt->train(data, ml::ROW_SAMPLE, labels);

    // run function
    Mat test(1, 4, CV_32F);
    Mat result;
    randu(test, 0, 10);
    rt->getVotes(test, result, 0);

    // count vote amount and find highest vote
    count = 0;
    const int* result_row = result.ptr<int>(1);
    for( i = 0; i < label_size; i++ )
    {
        val = result_row[i];
        //predicted_class = max_votes < val? i;
        if( max_votes < val )
        {
            max_votes = val;
            predicted_class = i;
        }
        count += val;
    }

    EXPECT_EQ(count, (int)rt->getRoots().size());
    EXPECT_EQ(result.at<float>(0, predicted_class), rt->predict(test));
}

}} // namespace
@@ -1,267 +1,100 @@
-/*M///////////////////////////////////////////////////////////////////////////////////////
+// This file is part of OpenCV project.
-//
+// It is subject to the license terms in the LICENSE file found in the top-level directory
-// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+// of this distribution and at http://opencv.org/license.html.
-//
-// By downloading, copying, installing or using the software you agree to this license.
-// If you do not agree to this license, do not download, install,
-// copy or use the software.
-//
-//
-// Intel License Agreement
-// For Open Source Computer Vision Library
-//
-// Copyright (C) 2000, Intel Corporation, all rights reserved.
-// Third party copyrights are property of their respective owners.
-//
-// Redistribution and use in source and binary forms, with or without modification,
-// are permitted provided that the following conditions are met:
-//
-// * Redistribution's of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-//
-// * Redistribution's in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-//
-// * The name of Intel Corporation may not be used to endorse or promote products
-// derived from this software without specific prior written permission.
-//
-// This software is provided by the copyright holders and contributors "as is" and
-// any express or implied warranties, including, but not limited to, the implied
-// warranties of merchantability and fitness for a particular purpose are disclaimed.
-// In no event shall the Intel Corporation or contributors be liable for any direct,
-// indirect, incidental, special, exemplary, or consequential damages
-// (including, but not limited to, procurement of substitute goods or services;
-// loss of use, data, or profits; or business interruption) however caused
-// and on any theory of liability, whether in contract, strict liability,
-// or tort (including negligence or otherwise) arising in any way out of
-// the use of this software, even if advised of the possibility of such damage.
-//
-//M*/

 #include "test_precomp.hpp"

-namespace opencv_test {
+namespace opencv_test { namespace {

-CV_SLMLTest::CV_SLMLTest( const char* _modelName ) : CV_MLBaseTest( _modelName )
+void randomFillCategories(const string & filename, Mat & input)
 {
-    validationFN = "slvalidation.xml";
+    Mat catMap;
+    Mat catCount;
+    std::vector<uchar> varTypes;
+
+    FileStorage fs(filename, FileStorage::READ);
+    FileNode root = fs.getFirstTopLevelNode();
+    root["cat_map"] >> catMap;
+    root["cat_count"] >> catCount;
+    root["var_type"] >> varTypes;
+
+    int offset = 0;
+    int countOffset = 0;
+    uint var = 0, varCount = (uint)varTypes.size();
+    for (; var < varCount; ++var)
+    {
+        if (varTypes[var] == ml::VAR_CATEGORICAL)
+        {
+            int size = catCount.at<int>(0, countOffset);
+            for (int row = 0; row < input.rows; ++row)
+            {
+                int randomChosenIndex = offset + ((uint)cv::theRNG()) % size;
+                int value = catMap.at<int>(0, randomChosenIndex);
+                input.at<float>(row, var) = (float)value;
+            }
+            offset += size;
+            ++countOffset;
+        }
+    }
 }

-int CV_SLMLTest::run_test_case( int testCaseIdx )
+//==================================================================================================
-{
-    int code = cvtest::TS::OK;
-    code = prepare_test_case( testCaseIdx );

-    if( code == cvtest::TS::OK )
+typedef tuple<string, string> ML_Legacy_Param;
-    {
+typedef testing::TestWithParam< ML_Legacy_Param > ML_Legacy_Params;
-        data->setTrainTestSplit(data->getNTrainSamples(), true);
-        code = train( testCaseIdx );
+TEST_P(ML_Legacy_Params, legacy_load)
-        if( code == cvtest::TS::OK )
+{
-        {
+    const string modelName = get<0>(GetParam());
-            get_test_error( testCaseIdx, &test_resps1 );
+    const string dataName = get<1>(GetParam());
-            fname1 = tempfile(".json.gz");
+    const string filename = findDataFile("legacy/" + modelName + "_" + dataName + ".xml");
-            save( (fname1 + "?base64").c_str() );
+    const bool isTree = modelName == CV_BOOST || modelName == CV_DTREE || modelName == CV_RTREES;
-            load( fname1.c_str() );
-            get_test_error( testCaseIdx, &test_resps2 );
+    Ptr<StatModel> model;
-            fname2 = tempfile(".json.gz");
+    if (modelName == CV_BOOST)
-            save( (fname2 + "?base64").c_str() );
+        model = Algorithm::load<Boost>(filename);
-        }
+    else if (modelName == CV_ANN)
-        else
+        model = Algorithm::load<ANN_MLP>(filename);
-            ts->printf( cvtest::TS::LOG, "model can not be trained" );
+    else if (modelName == CV_DTREE)
-    }
+        model = Algorithm::load<DTrees>(filename);
-    return code;
+    else if (modelName == CV_NBAYES)
+        model = Algorithm::load<NormalBayesClassifier>(filename);
+    else if (modelName == CV_SVM)
+        model = Algorithm::load<SVM>(filename);
+    else if (modelName == CV_RTREES)
+        model = Algorithm::load<RTrees>(filename);
+    else if (modelName == CV_SVMSGD)
+        model = Algorithm::load<SVMSGD>(filename);
+    ASSERT_TRUE(model);
+
+    Mat input = Mat(isTree ? 10 : 1, model->getVarCount(), CV_32F);
+    cv::theRNG().fill(input, RNG::UNIFORM, 0, 40);
+
+    if (isTree)
+        randomFillCategories(filename, input);
+
+    Mat output;
+    EXPECT_NO_THROW(model->predict(input, output, StatModel::RAW_OUTPUT | (isTree ? DTrees::PREDICT_SUM : 0)));
+    // just check if no internal assertions or errors thrown
 }

-int CV_SLMLTest::validate_test_results( int testCaseIdx )
+ML_Legacy_Param param_list[] = {
-{
+    ML_Legacy_Param(CV_ANN, "waveform"),
-    int code = cvtest::TS::OK;
+    ML_Legacy_Param(CV_BOOST, "adult"),
+    ML_Legacy_Param(CV_BOOST, "1"),
-    // 1. compare files
+    ML_Legacy_Param(CV_BOOST, "2"),
-    FILE *fs1 = fopen(fname1.c_str(), "rb"), *fs2 = fopen(fname2.c_str(), "rb");
+    ML_Legacy_Param(CV_BOOST, "3"),
-    size_t sz1 = 0, sz2 = 0;
+    ML_Legacy_Param(CV_DTREE, "abalone"),
-    if( !fs1 || !fs2 )
+    ML_Legacy_Param(CV_DTREE, "mushroom"),
-        code = cvtest::TS::FAIL_MISSING_TEST_DATA;
+    ML_Legacy_Param(CV_NBAYES, "waveform"),
-    if( code >= 0 )
+    ML_Legacy_Param(CV_SVM, "poletelecomm"),
-    {
+    ML_Legacy_Param(CV_SVM, "waveform"),
-        fseek(fs1, 0, SEEK_END); fseek(fs2, 0, SEEK_END);
+    ML_Legacy_Param(CV_RTREES, "waveform"),
-        sz1 = ftell(fs1);
+    ML_Legacy_Param(CV_SVMSGD, "waveform"),
-        sz2 = ftell(fs2);
-        fseek(fs1, 0, SEEK_SET); fseek(fs2, 0, SEEK_SET);
-    }
-
-    if( sz1 != sz2 )
-        code = cvtest::TS::FAIL_INVALID_OUTPUT;
-
-    if( code >= 0 )
-    {
-        const int BUFSZ = 1024;
-        uchar buf1[BUFSZ], buf2[BUFSZ];
-        for( size_t pos = 0; pos < sz1; )
-        {
-            size_t r1 = fread(buf1, 1, BUFSZ, fs1);
-            size_t r2 = fread(buf2, 1, BUFSZ, fs2);
-            if( r1 != r2 || memcmp(buf1, buf2, r1) != 0 )
-            {
-                ts->printf( cvtest::TS::LOG,
-                "in test case %d first (%s) and second (%s) saved files differ in %d-th kb\n",
-                testCaseIdx, fname1.c_str(), fname2.c_str(),
-                (int)pos );
-                code = cvtest::TS::FAIL_INVALID_OUTPUT;
-                break;
-            }
-            pos += r1;
-        }
-    }
-
-    if(fs1)
-        fclose(fs1);
-    if(fs2)
-        fclose(fs2);
-
-    // delete temporary files
-    if( code >= 0 )
-    {
-        remove( fname1.c_str() );
-        remove( fname2.c_str() );
-    }
-
-    if( code >= 0 )
-    {
-        // 2. compare responses
-        CV_Assert( test_resps1.size() == test_resps2.size() );
-        vector<float>::const_iterator it1 = test_resps1.begin(), it2 = test_resps2.begin();
-        for( ; it1 != test_resps1.end(); ++it1, ++it2 )
-        {
-            if( fabs(*it1 - *it2) > FLT_EPSILON )
-            {
-                ts->printf( cvtest::TS::LOG, "in test case %d responses predicted before saving and after loading is different", testCaseIdx );
-                code = cvtest::TS::FAIL_INVALID_OUTPUT;
-                break;
-            }
-        }
-    }
-    return code;
-}
-
-namespace {
-
-TEST(ML_NaiveBayes, save_load) { CV_SLMLTest test( CV_NBAYES ); test.safe_run(); }
-TEST(ML_KNearest, save_load) { CV_SLMLTest test( CV_KNEAREST ); test.safe_run(); }
-TEST(ML_SVM, save_load) { CV_SLMLTest test( CV_SVM ); test.safe_run(); }
-TEST(ML_ANN, save_load) { CV_SLMLTest test( CV_ANN ); test.safe_run(); }
-TEST(ML_DTree, save_load) { CV_SLMLTest test( CV_DTREE ); test.safe_run(); }
-TEST(ML_Boost, save_load) { CV_SLMLTest test( CV_BOOST ); test.safe_run(); }
-TEST(ML_RTrees, save_load) { CV_SLMLTest test( CV_RTREES ); test.safe_run(); }
-TEST(DISABLED_ML_ERTrees, save_load) { CV_SLMLTest test( CV_ERTREES ); test.safe_run(); }
-TEST(MV_SVMSGD, save_load){ CV_SLMLTest test( CV_SVMSGD ); test.safe_run(); }
-
-class CV_LegacyTest : public cvtest::BaseTest
-{
-public:
-    CV_LegacyTest(const std::string &_modelName, const std::string &_suffixes = std::string())
-        : cvtest::BaseTest(), modelName(_modelName), suffixes(_suffixes)
-    {
-    }
-    virtual ~CV_LegacyTest() {}
-protected:
-    void run(int)
-    {
-        unsigned int idx = 0;
-        for (;;)
-        {
-            if (idx >= suffixes.size())
-                break;
-            int found = (int)suffixes.find(';', idx);
-            string piece = suffixes.substr(idx, found - idx);
-            if (piece.empty())
-                break;
-            oneTest(piece);
-            idx += (unsigned int)piece.size() + 1;
-        }
-    }
-    void oneTest(const string & suffix)
-    {
-        using namespace cv::ml;
-
-        int code = cvtest::TS::OK;
-        string filename = ts->get_data_path() + "legacy/" + modelName + suffix;
-        bool isTree = modelName == CV_BOOST || modelName == CV_DTREE || modelName == CV_RTREES;
-        Ptr<StatModel> model;
-        if (modelName == CV_BOOST)
-            model = Algorithm::load<Boost>(filename);
-        else if (modelName == CV_ANN)
-            model = Algorithm::load<ANN_MLP>(filename);
-        else if (modelName == CV_DTREE)
-            model = Algorithm::load<DTrees>(filename);
-        else if (modelName == CV_NBAYES)
-            model = Algorithm::load<NormalBayesClassifier>(filename);
-        else if (modelName == CV_SVM)
-            model = Algorithm::load<SVM>(filename);
-        else if (modelName == CV_RTREES)
-            model = Algorithm::load<RTrees>(filename);
-        else if (modelName == CV_SVMSGD)
-            model = Algorithm::load<SVMSGD>(filename);
-        if (!model)
-        {
-            code = cvtest::TS::FAIL_INVALID_TEST_DATA;
-        }
-        else
-        {
-            Mat input = Mat(isTree ? 10 : 1, model->getVarCount(), CV_32F);
-            ts->get_rng().fill(input, RNG::UNIFORM, 0, 40);
-
-            if (isTree)
-                randomFillCategories(filename, input);
-
-            Mat output;
-            model->predict(input, output, StatModel::RAW_OUTPUT | (isTree ? DTrees::PREDICT_SUM : 0));
-            // just check if no internal assertions or errors thrown
-        }
-        ts->set_failed_test_info(code);
-    }
-    void randomFillCategories(const string & filename, Mat & input)
-    {
-        Mat catMap;
-        Mat catCount;
-        std::vector<uchar> varTypes;
-
-        FileStorage fs(filename, FileStorage::READ);
-        FileNode root = fs.getFirstTopLevelNode();
-        root["cat_map"] >> catMap;
-        root["cat_count"] >> catCount;
-        root["var_type"] >> varTypes;
-
-        int offset = 0;
-        int countOffset = 0;
-        uint var = 0, varCount = (uint)varTypes.size();
-        for (; var < varCount; ++var)
-        {
-            if (varTypes[var] == ml::VAR_CATEGORICAL)
-            {
-                int size = catCount.at<int>(0, countOffset);
-                for (int row = 0; row < input.rows; ++row)
-                {
-                    int randomChosenIndex = offset + ((uint)ts->get_rng()) % size;
-                    int value = catMap.at<int>(0, randomChosenIndex);
-                    input.at<float>(row, var) = (float)value;
-                }
-                offset += size;
-                ++countOffset;
-            }
-        }
-    }
-    string modelName;
-    string suffixes;
 };

-TEST(ML_ANN, legacy_load) { CV_LegacyTest test(CV_ANN, "_waveform.xml"); test.safe_run(); }
+INSTANTIATE_TEST_CASE_P(/**/, ML_Legacy_Params, testing::ValuesIn(param_list));
-TEST(ML_Boost, legacy_load) { CV_LegacyTest test(CV_BOOST, "_adult.xml;_1.xml;_2.xml;_3.xml"); test.safe_run(); }
-TEST(ML_DTree, legacy_load) { CV_LegacyTest test(CV_DTREE, "_abalone.xml;_mushroom.xml"); test.safe_run(); }
-TEST(ML_NBayes, legacy_load) { CV_LegacyTest test(CV_NBAYES, "_waveform.xml"); test.safe_run(); }
-TEST(ML_SVM, legacy_load) { CV_LegacyTest test(CV_SVM, "_poletelecomm.xml;_waveform.xml"); test.safe_run(); }
-TEST(ML_RTrees, legacy_load) { CV_LegacyTest test(CV_RTREES, "_waveform.xml"); test.safe_run(); }
-TEST(ML_SVMSGD, legacy_load) { CV_LegacyTest test(CV_SVMSGD, "_waveform.xml"); test.safe_run(); }

 /*TEST(ML_SVM, throw_exception_when_save_untrained_model)
 {
@@ -271,33 +104,4 @@ TEST(ML_SVMSGD, legacy_load) { CV_LegacyTest test(CV_SVMSGD, "_waveform.xml"); t
     remove(filename.c_str());
 }*/

-TEST(DISABLED_ML_SVM, linear_save_load)
-{
-    Ptr<cv::ml::SVM> svm1, svm2, svm3;
-
-    svm1 = Algorithm::load<SVM>("SVM45_X_38-1.xml");
-    svm2 = Algorithm::load<SVM>("SVM45_X_38-2.xml");
-    string tname = tempfile("a.json");
-    svm2->save(tname + "?base64");
-    svm3 = Algorithm::load<SVM>(tname);
-
-    ASSERT_EQ(svm1->getVarCount(), svm2->getVarCount());
-    ASSERT_EQ(svm1->getVarCount(), svm3->getVarCount());
-
-    int m = 10000, n = svm1->getVarCount();
-    Mat samples(m, n, CV_32F), r1, r2, r3;
-    randu(samples, 0., 1.);
-
-    svm1->predict(samples, r1);
-    svm2->predict(samples, r2);
-    svm3->predict(samples, r3);
-
-    double eps = 1e-4;
-    EXPECT_LE(cvtest::norm(r1, r2, NORM_INF), eps);
-    EXPECT_LE(cvtest::norm(r1, r3, NORM_INF), eps);
-
-    remove(tname.c_str());
-}

 }} // namespace
-/* End of file. */
@ -1,281 +1,119 @@
|
|||||||
/*M///////////////////////////////////////////////////////////////////////////////////////
|
// This file is part of OpenCV project.
|
||||||
//
|
// It is subject to the license terms in the LICENSE file found in the top-level directory
|
||||||
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
|
// of this distribution and at http://opencv.org/license.html.
|
||||||
//
|
|
||||||
// By downloading, copying, installing or using the software you agree to this license.
|
|
||||||
// If you do not agree to this license, do not download, install,
|
|
||||||
// copy or use the software.
|
|
||||||
//
|
|
||||||
//
|
|
||||||
// Intel License Agreement
|
|
||||||
// For Open Source Computer Vision Library
|
|
||||||
//
|
|
||||||
// Copyright (C) 2000, Intel Corporation, all rights reserved.
|
|
||||||
// Third party copyrights are property of their respective owners.
|
|
||||||
//
|
|
||||||
// Redistribution and use in source and binary forms, with or without modification,
|
|
||||||
// are permitted provided that the following conditions are met:
|
|
||||||
//
|
|
||||||
// * Redistribution's of source code must retain the above copyright notice,
|
|
||||||
// this list of conditions and the following disclaimer.
|
|
||||||
//
|
|
||||||
// * Redistribution's in binary form must reproduce the above copyright notice,
|
|
||||||
// this list of conditions and the following disclaimer in the documentation
|
|
||||||
// and/or other materials provided with the distribution.
|
|
||||||
//
|
|
||||||
// * The name of Intel Corporation may not be used to endorse or promote products
|
|
||||||
// derived from this software without specific prior written permission.
|
|
||||||
//
|
|
||||||
// This software is provided by the copyright holders and contributors "as is" and
|
|
||||||
// any express or implied warranties, including, but not limited to, the implied
|
|
||||||
// warranties of merchantability and fitness for a particular purpose are disclaimed.
|
|
||||||
// In no event shall the Intel Corporation or contributors be liable for any direct,
|
|
||||||
// indirect, incidental, special, exemplary, or consequential damages
|
|
||||||
// (including, but not limited to, procurement of substitute goods or services;
|
|
||||||
// loss of use, data, or profits; or business interruption) however caused
|
|
||||||
// and on any theory of liability, whether in contract, strict liability,
|
|
||||||
// or tort (including negligence or otherwise) arising in any way out of
|
|
||||||
// the use of this software, even if advised of the possibility of such damage.
|
|
||||||
//
|
|
||||||
//M*/
|
|
||||||
|
|
||||||
#include "test_precomp.hpp"
|
#include "test_precomp.hpp"
|
||||||
|
|
||||||
namespace opencv_test { namespace {
|
namespace opencv_test { namespace {
|
||||||
|
|
||||||
using cv::ml::SVMSGD;
|
static const int TEST_VALUE_LIMIT = 500;
|
||||||
using cv::ml::TrainData;
|
enum
|
||||||
|
|
||||||
class CV_SVMSGDTrainTest : public cvtest::BaseTest
|
|
||||||
{
|
{
|
||||||
public:
|
UNIFORM_SAME_SCALE,
|
||||||
enum TrainDataType
|
UNIFORM_DIFFERENT_SCALES
|
||||||
{
|
|
||||||
UNIFORM_SAME_SCALE,
|
|
||||||
UNIFORM_DIFFERENT_SCALES
|
|
||||||
};
|
|
||||||
|
|
||||||
CV_SVMSGDTrainTest(const Mat &_weights, float shift, TrainDataType type, double precision = 0.01);
|
|
||||||
private:
|
|
||||||
virtual void run( int start_from );
|
|
||||||
static float decisionFunction(const Mat &sample, const Mat &weights, float shift);
|
|
||||||
void makeData(int samplesCount, const Mat &weights, float shift, RNG &rng, Mat &samples, Mat & responses);
|
|
||||||
void generateSameBorders(int featureCount);
|
|
||||||
void generateDifferentBorders(int featureCount);
|
|
||||||
|
|
||||||
TrainDataType type;
|
|
||||||
double precision;
|
|
||||||
std::vector<std::pair<float,float> > borders;
|
|
||||||
cv::Ptr<TrainData> data;
|
|
||||||
cv::Mat testSamples;
|
|
||||||
cv::Mat testResponses;
|
|
||||||
static const int TEST_VALUE_LIMIT = 500;
|
|
||||||
};
|
};
|
||||||
|
|
||||||
void CV_SVMSGDTrainTest::generateSameBorders(int featureCount)
|
CV_ENUM(SVMSGD_TYPE, UNIFORM_SAME_SCALE, UNIFORM_DIFFERENT_SCALES)
|
||||||
{
|
|
||||||
float lowerLimit = -TEST_VALUE_LIMIT;
|
|
||||||
float upperLimit = TEST_VALUE_LIMIT;
|
|
||||||
|
|
||||||
for (int featureIndex = 0; featureIndex < featureCount; featureIndex++)
|
typedef std::vector< std::pair<float,float> > BorderList;
|
||||||
{
|
|
||||||
borders.push_back(std::pair<float,float>(lowerLimit, upperLimit));
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
void CV_SVMSGDTrainTest::generateDifferentBorders(int featureCount)
|
static void makeData(RNG &rng, int samplesCount, const Mat &weights, float shift, const BorderList & borders, Mat &samples, Mat & responses)
|
||||||
{
|
|
||||||
float lowerLimit = -TEST_VALUE_LIMIT;
|
|
||||||
float upperLimit = TEST_VALUE_LIMIT;
|
|
||||||
cv::RNG rng(0);
|
|
||||||
|
|
||||||
for (int featureIndex = 0; featureIndex < featureCount; featureIndex++)
|
|
||||||
{
|
|
||||||
int crit = rng.uniform(0, 2);
|
|
||||||
|
|
||||||
if (crit > 0)
|
|
||||||
{
|
|
||||||
borders.push_back(std::pair<float,float>(lowerLimit, upperLimit));
|
|
||||||
}
|
|
||||||
else
|
|
||||||
{
|
|
||||||
borders.push_back(std::pair<float,float>(lowerLimit/1000, upperLimit/1000));
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
float CV_SVMSGDTrainTest::decisionFunction(const Mat &sample, const Mat &weights, float shift)
|
|
||||||
{
|
|
||||||
return static_cast<float>(sample.dot(weights)) + shift;
|
|
||||||
}
|
|
||||||
|
|
||||||
void CV_SVMSGDTrainTest::makeData(int samplesCount, const Mat &weights, float shift, RNG &rng, Mat &samples, Mat & responses)
|
|
||||||
{
|
{
|
||||||
int featureCount = weights.cols;
|
int featureCount = weights.cols;
|
||||||
|
|
||||||
samples.create(samplesCount, featureCount, CV_32FC1);
|
samples.create(samplesCount, featureCount, CV_32FC1);
|
||||||
for (int featureIndex = 0; featureIndex < featureCount; featureIndex++)
|
for (int featureIndex = 0; featureIndex < featureCount; featureIndex++)
|
||||||
{
|
|
||||||
rng.fill(samples.col(featureIndex), RNG::UNIFORM, borders[featureIndex].first, borders[featureIndex].second);
|
rng.fill(samples.col(featureIndex), RNG::UNIFORM, borders[featureIndex].first, borders[featureIndex].second);
|
||||||
}
|
|
||||||
|
|
||||||
responses.create(samplesCount, 1, CV_32FC1);
|
responses.create(samplesCount, 1, CV_32FC1);
|
||||||
|
|
||||||
for (int i = 0 ; i < samplesCount; i++)
|
for (int i = 0 ; i < samplesCount; i++)
|
||||||
{
|
{
|
||||||
responses.at<float>(i) = decisionFunction(samples.row(i), weights, shift) > 0 ? 1.f : -1.f;
|
double res = samples.row(i).dot(weights) + shift;
|
||||||
|
responses.at<float>(i) = res > 0 ? 1.f : -1.f;
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
CV_SVMSGDTrainTest::CV_SVMSGDTrainTest(const Mat &weights, float shift, TrainDataType _type, double _precision)
|
//==================================================================================================
|
||||||
|
|
||||||
|
typedef tuple<SVMSGD_TYPE, int, double> ML_SVMSGD_Param;
|
||||||
|
typedef testing::TestWithParam<ML_SVMSGD_Param> ML_SVMSGD_Params;
|
||||||
|
|
||||||
|
TEST_P(ML_SVMSGD_Params, scale_and_features)
|
||||||
{
|
{
|
||||||
type = _type;
|
const int type = get<0>(GetParam());
|
||||||
precision = _precision;
|
const int featureCount = get<1>(GetParam());
|
||||||
|
const double precision = get<2>(GetParam());
|
||||||
|
|
||||||
int featureCount = weights.cols;
|
RNG &rng = cv::theRNG();
|
||||||
|
|
||||||
switch(type)
|
Mat_<float> weights(1, featureCount);
|
||||||
|
rng.fill(weights, RNG::UNIFORM, -1, 1);
|
||||||
|
const float shift = static_cast<float>(rng.uniform(-featureCount, featureCount));
|
||||||
|
|
||||||
|
BorderList borders;
|
||||||
|
float lowerLimit = -TEST_VALUE_LIMIT;
|
||||||
|
float upperLimit = TEST_VALUE_LIMIT;
|
||||||
|
if (type == UNIFORM_SAME_SCALE)
|
||||||
{
|
{
|
||||||
case UNIFORM_SAME_SCALE:
|
for (int featureIndex = 0; featureIndex < featureCount; featureIndex++)
|
||||||
generateSameBorders(featureCount);
|
borders.push_back(std::pair<float,float>(lowerLimit, upperLimit));
|
||||||
break;
|
|
||||||
case UNIFORM_DIFFERENT_SCALES:
|
|
||||||
generateDifferentBorders(featureCount);
|
|
||||||
break;
|
|
||||||
default:
|
|
||||||
CV_Error(CV_StsBadArg, "Unknown train data type");
|
|
||||||
}
|
}
|
||||||
|
else if (type == UNIFORM_DIFFERENT_SCALES)
|
||||||
RNG rng(0);
|
{
|
||||||
|
for (int featureIndex = 0; featureIndex < featureCount; featureIndex++)
|
||||||
|
{
|
||||||
|
int crit = rng.uniform(0, 2);
|
||||||
|
if (crit > 0)
|
||||||
|
borders.push_back(std::pair<float,float>(lowerLimit, upperLimit));
|
||||||
|
else
|
||||||
|
borders.push_back(std::pair<float,float>(lowerLimit/1000, upperLimit/1000));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
ASSERT_FALSE(borders.empty());
|
||||||
|
|
||||||
Mat trainSamples;
|
Mat trainSamples;
|
||||||
Mat trainResponses;
|
Mat trainResponses;
|
||||||
int trainSamplesCount = 10000;
|
int trainSamplesCount = 10000;
|
||||||
makeData(trainSamplesCount, weights, shift, rng, trainSamples, trainResponses);
|
makeData(rng, trainSamplesCount, weights, shift, borders, trainSamples, trainResponses);
|
||||||
data = TrainData::create(trainSamples, cv::ml::ROW_SAMPLE, trainResponses);
|
ASSERT_EQ(trainResponses.type(), CV_32FC1);
|
||||||
|
|
||||||
|
Mat testSamples;
|
||||||
|
Mat testResponses;
|
||||||
int testSamplesCount = 100000;
|
int testSamplesCount = 100000;
|
||||||
makeData(testSamplesCount, weights, shift, rng, testSamples, testResponses);
|
makeData(rng, testSamplesCount, weights, shift, borders, testSamples, testResponses);
|
||||||
}
|
ASSERT_EQ(testResponses.type(), CV_32FC1);
|
||||||
|
|
||||||
|
Ptr<TrainData> data = TrainData::create(trainSamples, cv::ml::ROW_SAMPLE, trainResponses);
|
||||||
|
ASSERT_TRUE(data);
|
||||||
|
|
||||||
void CV_SVMSGDTrainTest::run( int /*start_from*/ )
|
|
||||||
{
|
|
||||||
cv::Ptr<SVMSGD> svmsgd = SVMSGD::create();
|
cv::Ptr<SVMSGD> svmsgd = SVMSGD::create();
|
||||||
|
ASSERT_TRUE(svmsgd);
|
||||||
|
|
||||||
svmsgd->train(data);
|
svmsgd->train(data);
|
||||||
|
|
||||||
Mat responses;
|
Mat responses;
|
||||||
|
|
||||||
svmsgd->predict(testSamples, responses);
|
svmsgd->predict(testSamples, responses);
|
||||||
|
ASSERT_EQ(responses.type(), CV_32FC1);
|
||||||
|
ASSERT_EQ(responses.rows, testSamplesCount);
|
||||||
|
|
||||||
int errCount = 0;
|
int errCount = 0;
|
||||||
int testSamplesCount = testSamples.rows;
|
|
||||||
|
|
||||||
CV_Assert((responses.type() == CV_32FC1) && (testResponses.type() == CV_32FC1));
|
|
||||||
for (int i = 0; i < testSamplesCount; i++)
|
for (int i = 0; i < testSamplesCount; i++)
|
||||||
{
|
|
||||||
if (responses.at<float>(i) * testResponses.at<float>(i) < 0)
|
if (responses.at<float>(i) * testResponses.at<float>(i) < 0)
|
||||||
errCount++;
|
errCount++;
|
||||||
}
|
|
||||||
|
|
||||||
float err = (float)errCount / testSamplesCount;
|
float err = (float)errCount / testSamplesCount;
|
||||||
|
EXPECT_LE(err, precision);
|
||||||
if ( err > precision )
|
|
||||||
{
|
|
||||||
ts->set_failed_test_info(cvtest::TS::FAIL_BAD_ACCURACY);
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
void makeWeightsAndShift(int featureCount, Mat &weights, float &shift)
|
ML_SVMSGD_Param params_list[] = {
|
||||||
{
|
ML_SVMSGD_Param(UNIFORM_SAME_SCALE, 2, 0.01),
|
||||||
weights.create(1, featureCount, CV_32FC1);
|
ML_SVMSGD_Param(UNIFORM_SAME_SCALE, 5, 0.01),
|
||||||
cv::RNG rng(0);
|
ML_SVMSGD_Param(UNIFORM_SAME_SCALE, 100, 0.02),
|
||||||
double lowerLimit = -1;
|
ML_SVMSGD_Param(UNIFORM_DIFFERENT_SCALES, 2, 0.01),
|
||||||
double upperLimit = 1;
|
ML_SVMSGD_Param(UNIFORM_DIFFERENT_SCALES, 5, 0.01),
|
||||||
|
ML_SVMSGD_Param(UNIFORM_DIFFERENT_SCALES, 100, 0.01),
|
||||||
|
};
|
||||||
|
|
||||||
rng.fill(weights, RNG::UNIFORM, lowerLimit, upperLimit);
|
INSTANTIATE_TEST_CASE_P(/**/, ML_SVMSGD_Params, testing::ValuesIn(params_list));
|
||||||
shift = static_cast<float>(rng.uniform(-featureCount, featureCount));
|
|
||||||
}
|
|
||||||
|
|
||||||
|
//==================================================================================================
TEST(ML_SVMSGD, trainSameScale2)
{
    int featureCount = 2;

    Mat weights;

    float shift = 0;
    makeWeightsAndShift(featureCount, weights, shift);

    CV_SVMSGDTrainTest test(weights, shift, CV_SVMSGDTrainTest::UNIFORM_SAME_SCALE);
    test.safe_run();
}

TEST(ML_SVMSGD, trainSameScale5)
{
    int featureCount = 5;

    Mat weights;

    float shift = 0;
    makeWeightsAndShift(featureCount, weights, shift);

    CV_SVMSGDTrainTest test(weights, shift, CV_SVMSGDTrainTest::UNIFORM_SAME_SCALE);
    test.safe_run();
}

TEST(ML_SVMSGD, trainSameScale100)
{
    int featureCount = 100;

    Mat weights;

    float shift = 0;
    makeWeightsAndShift(featureCount, weights, shift);

    CV_SVMSGDTrainTest test(weights, shift, CV_SVMSGDTrainTest::UNIFORM_SAME_SCALE, 0.02);
    test.safe_run();
}

TEST(ML_SVMSGD, trainDifferentScales2)
{
    int featureCount = 2;

    Mat weights;

    float shift = 0;
    makeWeightsAndShift(featureCount, weights, shift);

    CV_SVMSGDTrainTest test(weights, shift, CV_SVMSGDTrainTest::UNIFORM_DIFFERENT_SCALES, 0.01);
    test.safe_run();
}

TEST(ML_SVMSGD, trainDifferentScales5)
{
    int featureCount = 5;

    Mat weights;

    float shift = 0;
    makeWeightsAndShift(featureCount, weights, shift);

    CV_SVMSGDTrainTest test(weights, shift, CV_SVMSGDTrainTest::UNIFORM_DIFFERENT_SCALES, 0.01);
    test.safe_run();
}

TEST(ML_SVMSGD, trainDifferentScales100)
{
    int featureCount = 100;

    Mat weights;

    float shift = 0;
    makeWeightsAndShift(featureCount, weights, shift);

    CV_SVMSGDTrainTest test(weights, shift, CV_SVMSGDTrainTest::UNIFORM_DIFFERENT_SCALES, 0.01);
    test.safe_run();
}

TEST(ML_SVMSGD, twoPoints)
{
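The ML_SVMSGD tests above all exercise the same idea: stochastic gradient descent on a hinge loss can recover a known separating hyperplane. As a stdlib-only sketch (not OpenCV's SVMSGD implementation; the function names, learning rate, and toy data below are illustrative), the core update loop looks like this:

```python
import random

def svmsgd_train(samples, labels, lr=0.05, lam=0.01, epochs=200, seed=0):
    """Hinge-loss SGD for a linear classifier; labels must be in {-1, +1}."""
    rnd = random.Random(seed)
    dim = len(samples[0])
    w, b = [0.0] * dim, 0.0
    idx = list(range(len(samples)))
    for _ in range(epochs):
        rnd.shuffle(idx)
        for i in idx:
            x, y = samples[i], labels[i]
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            # L2 regularization shrinks the weights every step
            w = [wj * (1.0 - lr * lam) for wj in w]
            if margin < 1:  # sample inside the margin: push towards correct side
                w = [wj + lr * y * xj for wj, xj in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy separable data, two clusters around (1, 1) and (-1, -1)
samples = [[1.0, 1.0], [1.2, 0.9], [-1.0, -1.0], [-0.9, -1.1]]
labels = [1, 1, -1, -1]
w, b = svmsgd_train(samples, labels)
```

The OpenCV tests do the same thing at larger scale: generate data from known `weights`/`shift`, train, and compare the learned separator against the ground truth within a tolerance (0.01–0.02 above).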
@ -1,43 +1,6 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

#include "test_precomp.hpp"

@ -46,21 +9,11 @@ namespace opencv_test { namespace {
using cv::ml::SVM;
using cv::ml::TrainData;

static Ptr<TrainData> makeRandomData(int datasize)
{
    cv::Mat samples = cv::Mat::zeros( datasize, 2, CV_32FC1 );
    cv::Mat responses = cv::Mat::zeros( datasize, 1, CV_32S );
    RNG &rng = cv::theRNG();
    for (int i = 0; i < datasize; ++i)
    {
        int response = rng.uniform(0, 2); // Random from {0, 1}.
@ -68,36 +21,14 @@ void CV_SVMTrainAutoTest::run( int /*start_from*/ )
        samples.at<float>( i, 1 ) = rng.uniform(0.f, 0.5f) + response * 0.5f;
        responses.at<int>( i, 0 ) = response;
    }
    return TrainData::create( samples, cv::ml::ROW_SAMPLE, responses );
}

static Ptr<TrainData> makeCircleData(int datasize, float scale_factor, float radius)
{
    // Populate samples with data that can be split into two concentric circles
    cv::Mat samples = cv::Mat::zeros( datasize, 2, CV_32FC1 );
    cv::Mat responses = cv::Mat::zeros( datasize, 1, CV_32S );

    for (int i = 0; i < datasize; i+=2)
    {
        const float pi = 3.14159f;
@ -115,32 +46,14 @@ TEST(ML_SVM, trainauto_sigmoid)
        samples.at<float>( i + 1, 1 ) = y * scale_factor;
        responses.at<int>( i + 1, 0 ) = 1;
    }
    return TrainData::create( samples, cv::ml::ROW_SAMPLE, responses );
}

static Ptr<TrainData> makeRandomData2(int datasize)
{
    cv::Mat samples = cv::Mat::zeros( datasize, 2, CV_32FC1 );
    cv::Mat responses = cv::Mat::zeros( datasize, 1, CV_32S );
    RNG &rng = cv::theRNG();
    for (int i = 0; i < datasize; ++i)
    {
        int response = rng.uniform(0, 2); // Random from {0, 1}.
@ -148,8 +61,59 @@ TEST(ML_SVM, trainAuto_regression_5369)
        samples.at<float>( i, 1 ) = (0.5f - response) * rng.uniform(0.f, 1.2f) + response;
        responses.at<int>( i, 0 ) = response;
    }
    return TrainData::create( samples, cv::ml::ROW_SAMPLE, responses );
}

//==================================================================================================

TEST(ML_SVM, trainauto)
{
    const int datasize = 100;
    cv::Ptr<TrainData> data = makeRandomData(datasize);
    ASSERT_TRUE(data);
    cv::Ptr<SVM> svm = SVM::create();
    ASSERT_TRUE(svm);
    svm->trainAuto( data, 10 ); // 2-fold cross validation.

    float test_data0[2] = {0.25f, 0.25f};
    cv::Mat test_point0 = cv::Mat( 1, 2, CV_32FC1, test_data0 );
    float result0 = svm->predict( test_point0 );
    float test_data1[2] = {0.75f, 0.75f};
    cv::Mat test_point1 = cv::Mat( 1, 2, CV_32FC1, test_data1 );
    float result1 = svm->predict( test_point1 );

    EXPECT_NEAR(result0, 0, 0.001);
    EXPECT_NEAR(result1, 1, 0.001);
}

TEST(ML_SVM, trainauto_sigmoid)
{
    const int datasize = 100;
    const float scale_factor = 0.5;
    const float radius = 2.0;
    cv::Ptr<TrainData> data = makeCircleData(datasize, scale_factor, radius);
    ASSERT_TRUE(data);

    cv::Ptr<SVM> svm = SVM::create();
    ASSERT_TRUE(svm);
    svm->setKernel(SVM::SIGMOID);
    svm->setGamma(10.0);
    svm->setCoef0(-10.0);
    svm->trainAuto( data, 10 ); // 2-fold cross validation.

    float test_data0[2] = {radius, radius};
    cv::Mat test_point0 = cv::Mat( 1, 2, CV_32FC1, test_data0 );
    EXPECT_FLOAT_EQ(svm->predict( test_point0 ), 0);

    float test_data1[2] = {scale_factor * radius, scale_factor * radius};
    cv::Mat test_point1 = cv::Mat( 1, 2, CV_32FC1, test_data1 );
    EXPECT_FLOAT_EQ(svm->predict( test_point1 ), 1);
}

TEST(ML_SVM, trainAuto_regression_5369)
{
    const int datasize = 100;
    Ptr<TrainData> data = makeRandomData2(datasize);
    cv::Ptr<SVM> svm = SVM::create();
    svm->trainAuto( data, 10 ); // 2-fold cross validation.

@ -164,16 +128,8 @@ TEST(ML_SVM, trainAuto_regression_5369)
    EXPECT_EQ(1., result1);
}

TEST(ML_SVM, getSupportVectors)
{
    // Set up training data
    int labels[4] = {1, -1, -1, -1};
    float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
@ -181,19 +137,18 @@ void CV_SVMGetSupportVectorsTest::run(int /*startFrom*/ )
    Mat labelsMat(4, 1, CV_32SC1, labels);

    Ptr<SVM> svm = SVM::create();
    ASSERT_TRUE(svm);
    svm->setType(SVM::C_SVC);
    svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));

    // Test retrieval of SVs and compressed SVs on linear SVM
    svm->setKernel(SVM::LINEAR);
    svm->train(trainingDataMat, cv::ml::ROW_SAMPLE, labelsMat);

    Mat sv = svm->getSupportVectors();
    EXPECT_EQ(1, sv.rows); // by default compressed SV returned
    sv = svm->getUncompressedSupportVectors();
    EXPECT_EQ(3, sv.rows);

    // Test retrieval of SVs and compressed SVs on non-linear SVM
    svm->setKernel(SVM::POLY);
@ -201,15 +156,9 @@ void CV_SVMGetSupportVectorsTest::run(int /*startFrom*/ )
    svm->train(trainingDataMat, cv::ml::ROW_SAMPLE, labelsMat);

    sv = svm->getSupportVectors();
    EXPECT_EQ(3, sv.rows);
    sv = svm->getUncompressedSupportVectors();
    EXPECT_EQ(0, sv.rows); // inapplicable for non-linear SVMs
}

}} // namespace
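The getSupportVectors expectations rely on a property of linear SVMs: since the decision function is f(x) = Σᵢ aᵢyᵢ⟨xᵢ, x⟩ + b, the weighted support vectors can be "compressed" into the single vector w = Σᵢ aᵢyᵢxᵢ, which is why the compressed form has one row while the uncompressed form keeps all three. A minimal sketch of that identity (pure Python; the coefficients and vectors below are hypothetical, not values OpenCV produces):

```python
def compress_support_vectors(svs, coeffs):
    """Fold weighted support vectors into one vector: w = sum_i c_i * x_i.
    Valid for the linear kernel only, where <x_i, x> is an ordinary dot product."""
    dim = len(svs[0])
    return [sum(c * sv[d] for sv, c in zip(svs, coeffs)) for d in range(dim)]

def decision(x, w, b):
    """Linear decision function <w, x> + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Hypothetical support vectors (echoing the test's training points) and
# hypothetical dual coefficients c_i = a_i * y_i:
svs = [[501.0, 10.0], [255.0, 10.0], [501.0, 255.0]]
coeffs = [0.002, -0.001, -0.001]
w = compress_support_vectors(svs, coeffs)
```

Evaluating the summed per-vector form and the compressed form at the same point gives identical results, which is exactly why non-linear kernels (POLY above) cannot compress: their kernel values do not distribute over a weighted sum of inputs.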
modules/ml/test/test_utils.cpp (new file, 189 lines)
@ -0,0 +1,189 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#include "test_precomp.hpp"

namespace opencv_test {

void defaultDistribs( Mat& means, vector<Mat>& covs, int type)
{
    float mp0[] = {0.0f, 0.0f}, cp0[] = {0.67f, 0.0f, 0.0f, 0.67f};
    float mp1[] = {5.0f, 0.0f}, cp1[] = {1.0f, 0.0f, 0.0f, 1.0f};
    float mp2[] = {1.0f, 5.0f}, cp2[] = {1.0f, 0.0f, 0.0f, 1.0f};
    means.create(3, 2, type);
    Mat m0( 1, 2, CV_32FC1, mp0 ), c0( 2, 2, CV_32FC1, cp0 );
    Mat m1( 1, 2, CV_32FC1, mp1 ), c1( 2, 2, CV_32FC1, cp1 );
    Mat m2( 1, 2, CV_32FC1, mp2 ), c2( 2, 2, CV_32FC1, cp2 );
    means.resize(3), covs.resize(3);

    Mat mr0 = means.row(0);
    m0.convertTo(mr0, type);
    c0.convertTo(covs[0], type);

    Mat mr1 = means.row(1);
    m1.convertTo(mr1, type);
    c1.convertTo(covs[1], type);

    Mat mr2 = means.row(2);
    m2.convertTo(mr2, type);
    c2.convertTo(covs[2], type);
}

// generate points sets by normal distributions
void generateData( Mat& data, Mat& labels, const vector<int>& sizes, const Mat& _means, const vector<Mat>& covs, int dataType, int labelType )
{
    vector<int>::const_iterator sit = sizes.begin();
    int total = 0;
    for( ; sit != sizes.end(); ++sit )
        total += *sit;
    CV_Assert( _means.rows == (int)sizes.size() && covs.size() == sizes.size() );
    CV_Assert( !data.empty() && data.rows == total );
    CV_Assert( data.type() == dataType );

    labels.create( data.rows, 1, labelType );

    randn( data, Scalar::all(-1.0), Scalar::all(1.0) );
    vector<Mat> means(sizes.size());
    for(int i = 0; i < _means.rows; i++)
        means[i] = _means.row(i);
    vector<Mat>::const_iterator mit = means.begin(), cit = covs.begin();
    int bi, ei = 0;
    sit = sizes.begin();
    for( int p = 0, l = 0; sit != sizes.end(); ++sit, ++mit, ++cit, l++ )
    {
        bi = ei;
        ei = bi + *sit;
        CV_Assert( mit->rows == 1 && mit->cols == data.cols );
        CV_Assert( cit->rows == data.cols && cit->cols == data.cols );
        for( int i = bi; i < ei; i++, p++ )
        {
            Mat r = data.row(i);
            r = r * (*cit) + *mit;
            if( labelType == CV_32FC1 )
                labels.at<float>(p, 0) = (float)l;
            else if( labelType == CV_32SC1 )
                labels.at<int>(p, 0) = l;
            else
            {
                CV_DbgAssert(0);
            }
        }
    }
}

int maxIdx( const vector<int>& count )
{
    int idx = -1;
    int maxVal = -1;
    vector<int>::const_iterator it = count.begin();
    for( int i = 0; it != count.end(); ++it, i++ )
    {
        if( *it > maxVal)
        {
            maxVal = *it;
            idx = i;
        }
    }
    CV_Assert( idx >= 0);
    return idx;
}

bool getLabelsMap( const Mat& labels, const vector<int>& sizes, vector<int>& labelsMap, bool checkClusterUniq)
{
    size_t total = 0, nclusters = sizes.size();
    for(size_t i = 0; i < sizes.size(); i++)
        total += sizes[i];

    CV_Assert( !labels.empty() );
    CV_Assert( labels.total() == total && (labels.cols == 1 || labels.rows == 1));
    CV_Assert( labels.type() == CV_32SC1 || labels.type() == CV_32FC1 );

    bool isFlt = labels.type() == CV_32FC1;

    labelsMap.resize(nclusters);

    vector<bool> buzy(nclusters, false);
    int startIndex = 0;
    for( size_t clusterIndex = 0; clusterIndex < sizes.size(); clusterIndex++ )
    {
        vector<int> count( nclusters, 0 );
        for( int i = startIndex; i < startIndex + sizes[clusterIndex]; i++)
        {
            int lbl = isFlt ? (int)labels.at<float>(i) : labels.at<int>(i);
            CV_Assert(lbl < (int)nclusters);
            count[lbl]++;
            CV_Assert(count[lbl] < (int)total);
        }
        startIndex += sizes[clusterIndex];

        int cls = maxIdx( count );
        CV_Assert( !checkClusterUniq || !buzy[cls] );

        labelsMap[clusterIndex] = cls;

        buzy[cls] = true;
    }

    if(checkClusterUniq)
    {
        for(size_t i = 0; i < buzy.size(); i++)
            if(!buzy[i])
                return false;
    }

    return true;
}

bool calcErr( const Mat& labels, const Mat& origLabels, const vector<int>& sizes, float& err, bool labelsEquivalent, bool checkClusterUniq)
{
    err = 0;
    CV_Assert( !labels.empty() && !origLabels.empty() );
    CV_Assert( labels.rows == 1 || labels.cols == 1 );
    CV_Assert( origLabels.rows == 1 || origLabels.cols == 1 );
    CV_Assert( labels.total() == origLabels.total() );
    CV_Assert( labels.type() == CV_32SC1 || labels.type() == CV_32FC1 );
    CV_Assert( origLabels.type() == labels.type() );

    vector<int> labelsMap;
    bool isFlt = labels.type() == CV_32FC1;
    if( !labelsEquivalent )
    {
        if( !getLabelsMap( labels, sizes, labelsMap, checkClusterUniq ) )
            return false;

        for( int i = 0; i < labels.rows; i++ )
            if( isFlt )
                err += labels.at<float>(i) != labelsMap[(int)origLabels.at<float>(i)] ? 1.f : 0.f;
            else
                err += labels.at<int>(i) != labelsMap[origLabels.at<int>(i)] ? 1.f : 0.f;
    }
    else
    {
        for( int i = 0; i < labels.rows; i++ )
            if( isFlt )
                err += labels.at<float>(i) != origLabels.at<float>(i) ? 1.f : 0.f;
            else
                err += labels.at<int>(i) != origLabels.at<int>(i) ? 1.f : 0.f;
    }
    err /= (float)labels.rows;
    return true;
}

bool calculateError( const Mat& _p_labels, const Mat& _o_labels, float& error)
{
    error = 0.0f;
    float accuracy = 0.0f;
    Mat _p_labels_temp;
    Mat _o_labels_temp;
    _p_labels.convertTo(_p_labels_temp, CV_32S);
    _o_labels.convertTo(_o_labels_temp, CV_32S);

    CV_Assert(_p_labels_temp.total() == _o_labels_temp.total());
    CV_Assert(_p_labels_temp.rows == _o_labels_temp.rows);

    accuracy = (float)countNonZero(_p_labels_temp == _o_labels_temp)/_p_labels_temp.rows;
    error = 1 - accuracy;
    return true;
}

} // namespace
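The getLabelsMap/calcErr pair above addresses a classic clustering-evaluation problem: cluster algorithms assign arbitrary label ids, so before counting errors each reference class is mapped to the predicted label that occurs most often inside it (maxIdx over per-class counts). A compact Python sketch of the same majority-vote idea (illustrative names, not OpenCV's API):

```python
from collections import Counter

def map_cluster_labels(pred, true):
    """For each reference class, pick the predicted label that occurs most
    often among its samples -- the majority-vote step of getLabelsMap/maxIdx."""
    mapping = {}
    for t in set(true):
        votes = Counter(p for p, tt in zip(pred, true) if tt == t)
        mapping[t] = votes.most_common(1)[0][0]
    return mapping

def clustering_error(pred, true):
    """Fraction of samples whose predicted label disagrees with the
    majority-vote label of their reference class (cf. calcErr)."""
    mapping = map_cluster_labels(pred, true)
    wrong = sum(1 for p, t in zip(pred, true) if p != mapping[t])
    return wrong / float(len(pred))

# Two reference classes; the clusterer renamed 0 -> 2 and 1 -> 5,
# and misplaced one sample:
true = [0, 0, 0, 1, 1, 1]
pred = [2, 2, 0, 5, 5, 5]
```

With the mapping {0: 2, 1: 5} recovered, only one of six samples disagrees, so the error is 1/6 rather than the 100% a naive label-for-label comparison would report. The `checkClusterUniq` flag in the C++ version additionally rejects mappings where two reference classes claim the same predicted cluster.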
@ -1,38 +1,66 @@
#!/usr/bin/env python
from __future__ import print_function

import ctypes
from functools import partial

import numpy as np
import cv2 as cv

from tests_common import NewOpenCVTests, unittest


def is_numeric(dtype):
    return np.issubdtype(dtype, np.integer) or np.issubdtype(dtype, np.floating)


def get_limits(dtype):
    if not is_numeric(dtype):
        return None, None

    if np.issubdtype(dtype, np.integer):
        info = np.iinfo(dtype)
    else:
        info = np.finfo(dtype)
    return info.min, info.max


def get_conversion_error_msg(value, expected, actual):
    return 'Conversion "{}" of type "{}" failed\nExpected: "{}" vs Actual "{}"'.format(
        value, type(value).__name__, expected, actual
    )


def get_no_exception_msg(value):
    return 'Exception is not risen for {} of type {}'.format(value, type(value).__name__)


class Bindings(NewOpenCVTests):

    def test_inheritance(self):
        bm = cv.StereoBM_create()
        bm.getPreFilterCap()  # from StereoBM
        bm.getBlockSize()  # from StereoMatcher

        boost = cv.ml.Boost_create()
        boost.getBoostType()  # from ml::Boost
        boost.getMaxDepth()  # from ml::DTrees
        boost.isClassifier()  # from ml::StatModel

    def test_redirectError(self):
        try:
            cv.imshow("", None)  # This causes an assert
            self.assertEqual("Dead code", 0)
        except cv.error as _e:
            pass

        handler_called = [False]

        def test_error_handler(status, func_name, err_msg, file_name, line):
            handler_called[0] = True

        cv.redirectError(test_error_handler)
        try:
            cv.imshow("", None)  # This causes an assert
            self.assertEqual("Dead code", 0)
        except cv.error as _e:
            self.assertEqual(handler_called[0], True)
@ -40,7 +68,7 @@ class Bindings(NewOpenCVTests):

        cv.redirectError(None)
        try:
            cv.imshow("", None)  # This causes an assert
            self.assertEqual("Dead code", 0)
        except cv.error as _e:
            pass
@ -48,41 +76,248 @@ class Bindings(NewOpenCVTests):

class Arguments(NewOpenCVTests):

    def _try_to_convert(self, conversion, value):
        try:
            result = conversion(value).lower()
        except Exception as e:
            self.fail(
                '{} "{}" is risen for conversion {} of type {}'.format(
                    type(e).__name__, e, value, type(value).__name__
                )
            )
        else:
            return result

    def test_InputArray(self):
        res1 = cv.utils.dumpInputArray(None)
        # self.assertEqual(res1, "InputArray: noArray()")  # not supported
        self.assertEqual(res1, "InputArray: empty()=true kind=0x00010000 flags=0x01010000 total(-1)=0 dims(-1)=0 size(-1)=0x0 type(-1)=CV_8UC1")
        res2_1 = cv.utils.dumpInputArray((1, 2))
        self.assertEqual(res2_1, "InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=2 dims(-1)=2 size(-1)=1x2 type(-1)=CV_64FC1")
        res2_2 = cv.utils.dumpInputArray(1.5)  # Scalar(1.5, 1.5, 1.5, 1.5)
        self.assertEqual(res2_2, "InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=4 dims(-1)=2 size(-1)=1x4 type(-1)=CV_64FC1")
        a = np.array([[1, 2], [3, 4], [5, 6]])
        res3 = cv.utils.dumpInputArray(a)  # 32SC1
        self.assertEqual(res3, "InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=6 dims(-1)=2 size(-1)=2x3 type(-1)=CV_32SC1")
        a = np.array([[[1, 2], [3, 4], [5, 6]]], dtype='f')
        res4 = cv.utils.dumpInputArray(a)  # 32FC2
        self.assertEqual(res4, "InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=3 dims(-1)=2 size(-1)=3x1 type(-1)=CV_32FC2")
        a = np.array([[[1, 2]], [[3, 4]], [[5, 6]]], dtype=float)
        res5 = cv.utils.dumpInputArray(a)  # 64FC2
        self.assertEqual(res5, "InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=3 dims(-1)=2 size(-1)=1x3 type(-1)=CV_64FC2")

    def test_InputArrayOfArrays(self):
        res1 = cv.utils.dumpInputArrayOfArrays(None)
        # self.assertEqual(res1, "InputArray: noArray()")  # not supported
        self.assertEqual(res1, "InputArrayOfArrays: empty()=true kind=0x00050000 flags=0x01050000 total(-1)=0 dims(-1)=1 size(-1)=0x0")
        res2_1 = cv.utils.dumpInputArrayOfArrays((1, 2))  # { Scalar:all(1), Scalar::all(2) }
        self.assertEqual(res2_1, "InputArrayOfArrays: empty()=false kind=0x00050000 flags=0x01050000 total(-1)=2 dims(-1)=1 size(-1)=2x1 type(0)=CV_64FC1 dims(0)=2 size(0)=1x4 type(0)=CV_64FC1")
        res2_2 = cv.utils.dumpInputArrayOfArrays([1.5])
        self.assertEqual(res2_2, "InputArrayOfArrays: empty()=false kind=0x00050000 flags=0x01050000 total(-1)=1 dims(-1)=1 size(-1)=1x1 type(0)=CV_64FC1 dims(0)=2 size(0)=1x4 type(0)=CV_64FC1")
        a = np.array([[1, 2], [3, 4], [5, 6]])
        b = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
        res3 = cv.utils.dumpInputArrayOfArrays([a, b])
        self.assertEqual(res3, "InputArrayOfArrays: empty()=false kind=0x00050000 flags=0x01050000 total(-1)=2 dims(-1)=1 size(-1)=2x1 type(0)=CV_32SC1 dims(0)=2 size(0)=2x3 type(0)=CV_32SC1")
        c = np.array([[[1, 2], [3, 4], [5, 6]]], dtype='f')
        res4 = cv.utils.dumpInputArrayOfArrays([c, a, b])
        self.assertEqual(res4, "InputArrayOfArrays: empty()=false kind=0x00050000 flags=0x01050000 total(-1)=3 dims(-1)=1 size(-1)=3x1 type(0)=CV_32FC2 dims(0)=2 size(0)=3x1 type(0)=CV_32FC2")

    def test_parse_to_bool_convertible(self):
        try_to_convert = partial(self._try_to_convert, cv.utils.dumpBool)
        for convertible_true in (True, 1, 64, np.bool(1), np.int8(123), np.int16(11), np.int32(2),
                                 np.int64(1), np.bool_(3), np.bool8(12)):
            actual = try_to_convert(convertible_true)
            self.assertEqual('bool: true', actual,
                             msg=get_conversion_error_msg(convertible_true, 'bool: true', actual))

        for convertible_false in (False, 0, np.uint8(0), np.bool_(0), np.int_(0)):
            actual = try_to_convert(convertible_false)
            self.assertEqual('bool: false', actual,
                             msg=get_conversion_error_msg(convertible_false, 'bool: false', actual))

    def test_parse_to_bool_not_convertible(self):
        for not_convertible in (1.2, np.float(2.3), 's', 'str', (1, 2), [1, 2], complex(1, 1), None,
                                complex(imag=2), complex(1.1), np.array([1, 0], dtype=np.bool)):
            with self.assertRaises((TypeError, OverflowError),
                                   msg=get_no_exception_msg(not_convertible)):
                _ = cv.utils.dumpBool(not_convertible)

    @unittest.skip('Wrong conversion behavior')
    def test_parse_to_bool_convertible_extra(self):
        try_to_convert = partial(self._try_to_convert, cv.utils.dumpBool)
        _, max_size_t = get_limits(ctypes.c_size_t)
        for convertible_true in (-1, max_size_t):
            actual = try_to_convert(convertible_true)
            self.assertEqual('bool: true', actual,
                             msg=get_conversion_error_msg(convertible_true, 'bool: true', actual))

    @unittest.skip('Wrong conversion behavior')
    def test_parse_to_bool_not_convertible_extra(self):
        for not_convertible in (np.array([False]), np.array([True], dtype=np.bool)):
            with self.assertRaises((TypeError, OverflowError),
                                   msg=get_no_exception_msg(not_convertible)):
                _ = cv.utils.dumpBool(not_convertible)

    def test_parse_to_int_convertible(self):
        try_to_convert = partial(self._try_to_convert, cv.utils.dumpInt)
        min_int, max_int = get_limits(ctypes.c_int)
        for convertible in (-10, -1, 2, int(43.2), np.uint8(15), np.int8(33), np.int16(-13),
                            np.int32(4), np.int64(345), (23), min_int, max_int, np.int_(33)):
|
||||||
|
expected = 'int: {0:d}'.format(convertible)
|
||||||
|
actual = try_to_convert(convertible)
|
||||||
|
self.assertEqual(expected, actual,
|
||||||
|
msg=get_conversion_error_msg(convertible, expected, actual))
|
||||||
|
|
||||||
|
def test_parse_to_int_not_convertible(self):
|
||||||
|
min_int, max_int = get_limits(ctypes.c_int)
|
||||||
|
for not_convertible in (1.2, np.float(4), float(3), np.double(45), 's', 'str',
|
||||||
|
np.array([1, 2]), (1,), [1, 2], min_int - 1, max_int + 1,
|
||||||
|
complex(1, 1), complex(imag=2), complex(1.1), None):
|
||||||
|
with self.assertRaises((TypeError, OverflowError, ValueError),
|
||||||
|
msg=get_no_exception_msg(not_convertible)):
|
||||||
|
_ = cv.utils.dumpInt(not_convertible)
|
||||||
|
|
||||||
|
@unittest.skip('Wrong conversion behavior')
|
||||||
|
def test_parse_to_int_not_convertible_extra(self):
|
||||||
|
for not_convertible in (np.bool_(True), True, False, np.float32(2.3),
|
||||||
|
np.array([3, ], dtype=int), np.array([-2, ], dtype=np.int32),
|
||||||
|
np.array([1, ], dtype=np.int), np.array([11, ], dtype=np.uint8)):
|
||||||
|
with self.assertRaises((TypeError, OverflowError),
|
||||||
|
msg=get_no_exception_msg(not_convertible)):
|
||||||
|
_ = cv.utils.dumpInt(not_convertible)
|
||||||
|
|
||||||
|
def test_parse_to_size_t_convertible(self):
|
||||||
|
try_to_convert = partial(self._try_to_convert, cv.utils.dumpSizeT)
|
||||||
|
_, max_uint = get_limits(ctypes.c_uint)
|
||||||
|
for convertible in (2, True, False, max_uint, (12), np.uint8(34), np.int8(12), np.int16(23),
|
||||||
|
np.int32(123), np.int64(344), np.uint64(3), np.uint16(2), np.uint32(5),
|
||||||
|
np.uint(44)):
|
||||||
|
expected = 'size_t: {0:d}'.format(convertible).lower()
|
||||||
|
actual = try_to_convert(convertible)
|
||||||
|
self.assertEqual(expected, actual,
|
||||||
|
msg=get_conversion_error_msg(convertible, expected, actual))
|
||||||
|
|
||||||
|
def test_parse_to_size_t_not_convertible(self):
|
||||||
|
for not_convertible in (1.2, np.float(4), float(3), np.double(45), 's', 'str',
|
||||||
|
np.array([1, 2]), (1,), [1, 2], np.float64(6), complex(1, 1),
|
||||||
|
complex(imag=2), complex(1.1), None):
|
||||||
|
with self.assertRaises((TypeError, OverflowError),
|
||||||
|
msg=get_no_exception_msg(not_convertible)):
|
||||||
|
_ = cv.utils.dumpSizeT(not_convertible)
|
||||||
|
|
||||||
|
@unittest.skip('Wrong conversion behavior')
|
||||||
|
def test_parse_to_size_t_convertible_extra(self):
|
||||||
|
try_to_convert = partial(self._try_to_convert, cv.utils.dumpSizeT)
|
||||||
|
_, max_size_t = get_limits(ctypes.c_size_t)
|
||||||
|
for convertible in (max_size_t,):
|
||||||
|
expected = 'size_t: {0:d}'.format(convertible).lower()
|
||||||
|
actual = try_to_convert(convertible)
|
||||||
|
self.assertEqual(expected, actual,
|
||||||
|
msg=get_conversion_error_msg(convertible, expected, actual))
|
||||||
|
|
||||||
|
@unittest.skip('Wrong conversion behavior')
|
||||||
|
def test_parse_to_size_t_not_convertible_extra(self):
|
||||||
|
for not_convertible in (np.bool_(True), True, False, np.array([123, ], dtype=np.uint8),):
|
||||||
|
with self.assertRaises((TypeError, OverflowError),
|
||||||
|
msg=get_no_exception_msg(not_convertible)):
|
||||||
|
_ = cv.utils.dumpSizeT(not_convertible)
|
||||||
|
|
||||||
|
def test_parse_to_float_convertible(self):
|
||||||
|
try_to_convert = partial(self._try_to_convert, cv.utils.dumpFloat)
|
||||||
|
min_float, max_float = get_limits(ctypes.c_float)
|
||||||
|
for convertible in (2, -13, 1.24, float(32), np.float(32.45), np.double(12.23),
|
||||||
|
np.float32(-12.3), np.float64(3.22), np.float_(-1.5), min_float,
|
||||||
|
max_float, np.inf, -np.inf, float('Inf'), -float('Inf'),
|
||||||
|
np.double(np.inf), np.double(-np.inf), np.double(float('Inf')),
|
||||||
|
np.double(-float('Inf'))):
|
||||||
|
expected = 'Float: {0:.2f}'.format(convertible).lower()
|
||||||
|
actual = try_to_convert(convertible)
|
||||||
|
self.assertEqual(expected, actual,
|
||||||
|
msg=get_conversion_error_msg(convertible, expected, actual))
|
||||||
|
|
||||||
|
# Workaround for Windows NaN tests due to Visual C runtime
|
||||||
|
# special floating point values (indefinite NaN)
|
||||||
|
for nan in (float('NaN'), np.nan, np.float32(np.nan), np.double(np.nan),
|
||||||
|
np.double(float('NaN'))):
|
||||||
|
actual = try_to_convert(nan)
|
||||||
|
self.assertIn('nan', actual, msg="Can't convert nan of type {} to float. "
|
||||||
|
"Actual: {}".format(type(nan).__name__, actual))
|
||||||
|
|
||||||
|
min_double, max_double = get_limits(ctypes.c_double)
|
||||||
|
for inf in (min_float * 10, max_float * 10, min_double, max_double):
|
||||||
|
expected = 'float: {}inf'.format('-' if inf < 0 else '')
|
||||||
|
actual = try_to_convert(inf)
|
||||||
|
self.assertEqual(expected, actual,
|
||||||
|
msg=get_conversion_error_msg(inf, expected, actual))
|
||||||
|
|
||||||
|
def test_parse_to_float_not_convertible(self):
|
||||||
|
for not_convertible in ('s', 'str', (12,), [1, 2], None, np.array([1, 2], dtype=np.float),
|
||||||
|
np.array([1, 2], dtype=np.double), complex(1, 1), complex(imag=2),
|
||||||
|
complex(1.1)):
|
||||||
|
with self.assertRaises((TypeError), msg=get_no_exception_msg(not_convertible)):
|
||||||
|
_ = cv.utils.dumpFloat(not_convertible)
|
||||||
|
|
||||||
|
@unittest.skip('Wrong conversion behavior')
|
||||||
|
def test_parse_to_float_not_convertible_extra(self):
|
||||||
|
for not_convertible in (np.bool_(False), True, False, np.array([123, ], dtype=int),
|
||||||
|
np.array([1., ]), np.array([False]),
|
||||||
|
np.array([True], dtype=np.bool)):
|
||||||
|
with self.assertRaises((TypeError, OverflowError),
|
||||||
|
msg=get_no_exception_msg(not_convertible)):
|
||||||
|
_ = cv.utils.dumpFloat(not_convertible)
|
||||||
|
|
||||||
|
def test_parse_to_double_convertible(self):
|
||||||
|
try_to_convert = partial(self._try_to_convert, cv.utils.dumpDouble)
|
||||||
|
min_float, max_float = get_limits(ctypes.c_float)
|
||||||
|
min_double, max_double = get_limits(ctypes.c_double)
|
||||||
|
for convertible in (2, -13, 1.24, np.float(32.45), float(2), np.double(12.23),
|
||||||
|
np.float32(-12.3), np.float64(3.22), np.float_(-1.5), min_float,
|
||||||
|
max_float, min_double, max_double, np.inf, -np.inf, float('Inf'),
|
||||||
|
-float('Inf'), np.double(np.inf), np.double(-np.inf),
|
||||||
|
np.double(float('Inf')), np.double(-float('Inf'))):
|
||||||
|
expected = 'Double: {0:.2f}'.format(convertible).lower()
|
||||||
|
actual = try_to_convert(convertible)
|
||||||
|
self.assertEqual(expected, actual,
|
||||||
|
msg=get_conversion_error_msg(convertible, expected, actual))
|
||||||
|
|
||||||
|
# Workaround for Windows NaN tests due to Visual C runtime
|
||||||
|
# special floating point values (indefinite NaN)
|
||||||
|
for nan in (float('NaN'), np.nan, np.double(np.nan),
|
||||||
|
np.double(float('NaN'))):
|
||||||
|
actual = try_to_convert(nan)
|
||||||
|
self.assertIn('nan', actual, msg="Can't convert nan of type {} to double. "
|
||||||
|
"Actual: {}".format(type(nan).__name__, actual))
|
||||||
|
|
||||||
|
def test_parse_to_double_not_convertible(self):
|
||||||
|
for not_convertible in ('s', 'str', (12,), [1, 2], None, np.array([1, 2], dtype=np.float),
|
||||||
|
np.array([1, 2], dtype=np.double), complex(1, 1), complex(imag=2),
|
||||||
|
complex(1.1)):
|
||||||
|
with self.assertRaises((TypeError), msg=get_no_exception_msg(not_convertible)):
|
||||||
|
_ = cv.utils.dumpDouble(not_convertible)
|
||||||
|
|
||||||
|
@unittest.skip('Wrong conversion behavior')
|
||||||
|
def test_parse_to_double_not_convertible_extra(self):
|
||||||
|
for not_convertible in (np.bool_(False), True, False, np.array([123, ], dtype=int),
|
||||||
|
np.array([1., ]), np.array([False]),
|
||||||
|
np.array([12.4], dtype=np.double), np.array([True], dtype=np.bool)):
|
||||||
|
with self.assertRaises((TypeError, OverflowError),
|
||||||
|
msg=get_no_exception_msg(not_convertible)):
|
||||||
|
_ = cv.utils.dumpDouble(not_convertible)
|
||||||
|
|
||||||
|
def test_parse_to_cstring_convertible(self):
|
||||||
|
try_to_convert = partial(self._try_to_convert, cv.utils.dumpCString)
|
||||||
|
for convertible in ('s', 'str', str(123), ('char'), np.str('test1'), np.str_('test2')):
|
||||||
|
expected = 'string: ' + convertible
|
||||||
|
actual = try_to_convert(convertible)
|
||||||
|
self.assertEqual(expected, actual,
|
||||||
|
msg=get_conversion_error_msg(convertible, expected, actual))
|
||||||
|
|
||||||
|
def test_parse_to_cstring_not_convertible(self):
|
||||||
|
for not_convertible in ((12,), ('t', 'e', 's', 't'), np.array(['123', ]),
|
||||||
|
np.array(['t', 'e', 's', 't']), 1, -1.4, True, False, None):
|
||||||
|
with self.assertRaises((TypeError), msg=get_no_exception_msg(not_convertible)):
|
||||||
|
_ = cv.utils.dumpCString(not_convertible)
|
||||||
|
|
||||||
|
|
||||||
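The limit checks above rely on a `get_limits` helper defined elsewhere in the test suite. A minimal sketch, under the assumption that it returns a `(min, max)` pair for an integer `ctypes` type (the detection trick below is an illustration, not the suite's actual implementation):

```python
import ctypes

def get_limits(ctype):
    # Hypothetical reimplementation for integer ctypes types only:
    # a signed type stores -1 as -1, an unsigned type wraps it around.
    signed = ctype(-1).value == -1
    bits = ctypes.sizeof(ctype) * 8
    if signed:
        return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return 0, 2 ** bits - 1

min_int, max_int = get_limits(ctypes.c_int)
```

Floating-point types (`c_float`, `c_double`) would need a separate path, e.g. via the `float.h` constants; the sketch covers only the integer cases used by the `dumpInt`/`dumpSizeT` tests.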
class SamplesFindFile(NewOpenCVTests):

@@ -219,11 +219,46 @@ public:
    */
    AffineWarper(float scale = 1.f) : PlaneWarper(scale) {}

    /** @brief Projects the image point.

    @param pt Source point
    @param K Camera intrinsic parameters
    @param H Camera extrinsic parameters
    @return Projected point
    */
    Point2f warpPoint(const Point2f &pt, InputArray K, InputArray H) CV_OVERRIDE;

    /** @brief Builds the projection maps according to the given camera data.

    @param src_size Source image size
    @param K Camera intrinsic parameters
    @param H Camera extrinsic parameters
    @param xmap Projection map for the x axis
    @param ymap Projection map for the y axis
    @return Projected image minimum bounding box
    */
    Rect buildMaps(Size src_size, InputArray K, InputArray H, OutputArray xmap, OutputArray ymap) CV_OVERRIDE;

    /** @brief Projects the image.

    @param src Source image
    @param K Camera intrinsic parameters
    @param H Camera extrinsic parameters
    @param interp_mode Interpolation mode
    @param border_mode Border extrapolation mode
    @param dst Projected image
    @return Project image top-left corner
    */
    Point warp(InputArray src, InputArray K, InputArray H,
               int interp_mode, int border_mode, OutputArray dst) CV_OVERRIDE;

    /**
    @param src_size Source image bounding box
    @param K Camera intrinsic parameters
    @param H Camera extrinsic parameters
    @return Projected image minimum bounding box
    */
    Rect warpRoi(Size src_size, InputArray K, InputArray H) CV_OVERRIDE;

protected:
    /** @brief Extracts rotation and translation matrices from matrix H representing
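The protected helper documented above extracts rotation and translation matrices from the affine matrix H. A small numpy sketch (with a hypothetical H, not values from OpenCV) of why splitting H into its 2x2 linear part R and translation t is equivalent to applying the full homogeneous matrix:

```python
import numpy as np

# Hypothetical 3x3 affine matrix H = [[a, b, tx], [c, d, ty], [0, 0, 1]]
H = np.array([[1.0, 0.2, 5.0],
              [0.1, 1.0, 7.0],
              [0.0, 0.0, 1.0]])

R = H[:2, :2]   # linear (rotation/shear/scale) part
t = H[:2, 2]    # translation part

# Projecting a point through H equals R @ p + t
p = np.array([3.0, 4.0])
proj_full = (H @ np.array([p[0], p[1], 1.0]))[:2]
proj_split = R @ p + t
assert np.allclose(proj_full, proj_split)
```

This is only the algebraic identity behind the extraction; the actual warper additionally applies the camera intrinsics K.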
@@ -180,12 +180,21 @@ using testing::tuple_size;
using testing::tuple_element;


namespace details {
class SkipTestExceptionBase: public cv::Exception
{
public:
    SkipTestExceptionBase(bool handlingTags);
    SkipTestExceptionBase(const cv::String& message, bool handlingTags);
};
}

class SkipTestException: public details::SkipTestExceptionBase
{
public:
    int dummy; // workaround for MacOSX Xcode 7.3 bug (don't make class "empty")
    SkipTestException() : details::SkipTestExceptionBase(false), dummy(0) {}
    SkipTestException(const cv::String& message) : details::SkipTestExceptionBase(message, false), dummy(0) { }
};

/** Apply tag to the current test
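The refactoring above introduces a SkipTestExceptionBase whose constructor records the skip in a "skip_other" counter unless the tag machinery already accounted for it. A hedged Python sketch of the same pattern (the names and counter layout are illustrative, not the real ts module):

```python
# Global skip statistics, analogous to the testTagIncreaseSkipCount counters.
skip_counts = {}

class SkipTestExceptionBase(Exception):
    def __init__(self, message="", handling_tags=False):
        super().__init__(message)
        # Only count here when the tag handling code did not already do it.
        if not handling_tags:
            skip_counts["skip_other"] = skip_counts.get("skip_other", 0) + 1

class SkipTestException(SkipTestExceptionBase):
    def __init__(self, message=""):
        super().__init__(message, handling_tags=False)

# Test runners catch the *base* type, so both user skips and tag skips match.
try:
    raise SkipTestException("functionality is not available")
except SkipTestExceptionBase as e:
    print("[ SKIP ]", e)
```

Catching the base class is exactly what the updated `catch (const cvtest::details::SkipTestExceptionBase&)` blocks below do.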
@@ -16,6 +16,9 @@ extern int testThreads;

void testSetUp();
void testTearDown();

bool checkBigDataTests();

}

// check for required "opencv_test" namespace
@@ -37,7 +40,7 @@ void testTearDown();
        Body(); \
        CV__TEST_CLEANUP \
    } \
    catch (const cvtest::details::SkipTestExceptionBase& e) \
    { \
        printf("[ SKIP ] %s\n", e.what()); \
    } \
@@ -74,9 +77,8 @@ void testTearDown();

#define CV__TEST_BIGDATA_BODY_IMPL(name) \
    { \
        if (!cvtest::checkBigDataTests()) \
        { \
            return; \
        } \
        CV__TRACE_APP_FUNCTION_NAME(name); \
|
|||||||
Body(); \
|
Body(); \
|
||||||
CV__TEST_CLEANUP \
|
CV__TEST_CLEANUP \
|
||||||
} \
|
} \
|
||||||
catch (const cvtest::SkipTestException& e) \
|
catch (const cvtest::details::SkipTestExceptionBase& e) \
|
||||||
{ \
|
{ \
|
||||||
printf("[ SKIP ] %s\n", e.what()); \
|
printf("[ SKIP ] %s\n", e.what()); \
|
||||||
} \
|
} \
|
||||||
|
@@ -386,7 +386,7 @@ public:
    static enum PERF_STRATEGY getCurrentModulePerformanceStrategy();
    static enum PERF_STRATEGY setModulePerformanceStrategy(enum PERF_STRATEGY strategy);

class PerfSkipTestException: public cvtest::SkipTestException
{
public:
    int dummy; // workaround for MacOSX Xcode 7.3 bug (don't make class "empty")
@@ -531,7 +531,7 @@ void PrintTo(const Size& sz, ::std::ostream* os);
        ::cvtest::testSetUp(); \
        RunPerfTestBody(); \
    } \
    catch (cvtest::details::SkipTestExceptionBase& e) \
    { \
        printf("[ SKIP ] %s\n", e.what()); \
    } \
@@ -125,6 +125,20 @@ bool required_opencv_test_namespace = false; // compilation check for non-refactored tests
namespace cvtest
{

details::SkipTestExceptionBase::SkipTestExceptionBase(bool handlingTags)
{
    if (!handlingTags)
    {
        testTagIncreaseSkipCount("skip_other", true, true);
    }
}
details::SkipTestExceptionBase::SkipTestExceptionBase(const cv::String& message, bool handlingTags)
{
    if (!handlingTags)
        testTagIncreaseSkipCount("skip_other", true, true);
    this->msg = message;
}

uint64 param_seed = 0x12345678; // real value is passed via parseCustomOptions function

static std::string path_join(const std::string& prefix, const std::string& subpath)
@@ -850,6 +864,17 @@ void testTearDown()
    }
}

bool checkBigDataTests()
{
    if (!runBigDataTests)
    {
        testTagIncreaseSkipCount("skip_bigdata", true, true);
        printf("[ SKIP ] BigData tests are disabled\n");
        return false;
    }
    return true;
}

void parseCustomOptions(int argc, char **argv)
{
    const string command_line_keys = string(
@@ -2003,16 +2003,16 @@ void TestBase::RunPerfTestBody()
        implConf.GetImpl();
#endif
    }
    catch(const PerfSkipTestException&)
    {
        metrics.terminationReason = performance_metrics::TERM_SKIP_TEST;
        return;
    }
    catch(const cvtest::details::SkipTestExceptionBase&)
    {
        metrics.terminationReason = performance_metrics::TERM_SKIP_TEST;
        throw;
    }
    catch(const PerfEarlyExitException&)
    {
        metrics.terminationReason = performance_metrics::TERM_INTERRUPT;
@@ -23,8 +23,10 @@ static std::map<std::string, int>& getTestTagsSkipExtraCounts()
    static std::map<std::string, int> testTagsSkipExtraCounts;
    return testTagsSkipExtraCounts;
}
void testTagIncreaseSkipCount(const std::string& tag, bool isMain, bool appendSkipTests)
{
    if (appendSkipTests)
        skipped_tests.push_back(::testing::UnitTest::GetInstance()->current_test_info());
    std::map<std::string, int>& counts = isMain ? getTestTagsSkipCounts() : getTestTagsSkipExtraCounts();
    std::map<std::string, int>::iterator i = counts.find(tag);
    if (i == counts.end())
@@ -192,11 +194,14 @@ public:
    {
        if (!skipped_tests.empty())
        {
            std::cout << "[ SKIPSTAT ] " << skipped_tests.size() << " tests skipped" << std::endl;
            const std::vector<std::string>& skipTags = getTestTagsSkipList();
            const std::map<std::string, int>& counts = getTestTagsSkipCounts();
            const std::map<std::string, int>& countsExtra = getTestTagsSkipExtraCounts();
            std::vector<std::string> skipTags_all = skipTags;
            skipTags_all.push_back("skip_bigdata");
            skipTags_all.push_back("skip_other");
            for (std::vector<std::string>::const_iterator i = skipTags_all.begin(); i != skipTags_all.end(); ++i)
            {
                int c1 = 0;
                std::map<std::string, int>::const_iterator i1 = counts.find(*i);
@@ -301,7 +306,7 @@ void checkTestTags()
        if (found != tags.size())
        {
            skipped_tests.push_back(::testing::UnitTest::GetInstance()->current_test_info());
            throw details::SkipTestExceptionBase("Test tags don't pass required tags list (--test_tag parameter)", true);
        }
    }
}
@@ -317,7 +322,7 @@ void checkTestTags()
        const std::string& testTag = testTags[i];
        if (isTestTagSkipped(testTag, skipTag))
        {
            testTagIncreaseSkipCount(skipTag, skip_message.empty());
            if (skip_message.empty()) skip_message = "Test with tag '" + testTag + "' is skipped ('" + skipTag + "' is in skip list)";
        }
    }
@@ -327,7 +332,7 @@ void checkTestTags()
        const std::string& testTag = testTagsImplied[i];
        if (isTestTagSkipped(testTag, skipTag))
        {
            testTagIncreaseSkipCount(skipTag, skip_message.empty());
            if (skip_message.empty()) skip_message = "Test with tag '" + testTag + "' is skipped (implied '" + skipTag + "' is in skip list)";
        }
    }
@@ -335,7 +340,7 @@ void checkTestTags()
    if (!skip_message.empty())
    {
        skipped_tests.push_back(::testing::UnitTest::GetInstance()->current_test_info());
        throw details::SkipTestExceptionBase(skip_message, true);
    }
}

@@ -21,6 +21,8 @@ namespace cvtest {

void activateTestTags(const cv::CommandLineParser& parser);

void testTagIncreaseSkipCount(const std::string& tag, bool isMain = true, bool appendSkipTests = false);

} // namespace

#endif // OPENCV_TS_SRC_TAGS_HPP
@@ -284,6 +284,32 @@ typedef uint32_t __u32;

namespace cv {

static const char* decode_ioctl_code(unsigned long ioctlCode)
{
    switch (ioctlCode)
    {
#define CV_ADD_IOCTL_CODE(id) case id: return #id
    CV_ADD_IOCTL_CODE(VIDIOC_G_FMT);
    CV_ADD_IOCTL_CODE(VIDIOC_S_FMT);
    CV_ADD_IOCTL_CODE(VIDIOC_REQBUFS);
    CV_ADD_IOCTL_CODE(VIDIOC_DQBUF);
    CV_ADD_IOCTL_CODE(VIDIOC_QUERYCAP);
    CV_ADD_IOCTL_CODE(VIDIOC_S_PARM);
    CV_ADD_IOCTL_CODE(VIDIOC_G_PARM);
    CV_ADD_IOCTL_CODE(VIDIOC_QUERYBUF);
    CV_ADD_IOCTL_CODE(VIDIOC_QBUF);
    CV_ADD_IOCTL_CODE(VIDIOC_STREAMON);
    CV_ADD_IOCTL_CODE(VIDIOC_STREAMOFF);
    CV_ADD_IOCTL_CODE(VIDIOC_ENUMINPUT);
    CV_ADD_IOCTL_CODE(VIDIOC_G_INPUT);
    CV_ADD_IOCTL_CODE(VIDIOC_S_INPUT);
    CV_ADD_IOCTL_CODE(VIDIOC_G_CTRL);
    CV_ADD_IOCTL_CODE(VIDIOC_S_CTRL);
#undef CV_ADD_IOCTL_CODE
    }
    return "unknown";
}

/* Device Capture Objects */
/* V4L2 structure */
struct Buffer
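The `decode_ioctl_code` helper above uses an X-macro so each `case` both matches an ioctl constant and stringifies its name. A rough Python analogue of the same code-to-name mapping idea (the numeric values below are placeholders for illustration, not the real Linux VIDIOC codes):

```python
# Hypothetical ioctl request codes; real values come from <linux/videodev2.h>.
VIDIOC_G_FMT = 0xC0D05604
VIDIOC_S_FMT = 0xC0D05605

# Build the code -> name table once, mirroring the CV_ADD_IOCTL_CODE macro.
IOCTL_NAMES = {code: name for name, code in [
    ("VIDIOC_G_FMT", VIDIOC_G_FMT),
    ("VIDIOC_S_FMT", VIDIOC_S_FMT),
]}

def decode_ioctl_code(code):
    # Unknown codes fall through to a fixed string, as in the C++ helper.
    return IOCTL_NAMES.get(code, "unknown")
```

Such a table makes V4L2 error logs readable: a failed `ioctl` can report `VIDIOC_S_FMT` instead of a raw hex request number.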
@@ -305,6 +331,9 @@ struct CvCaptureCAM_V4L CV_FINAL : public CvCapture
    int getCaptureDomain() /*const*/ CV_OVERRIDE { return cv::CAP_V4L; }

    int deviceHandle;
    bool v4l_buffersRequested;
    bool v4l_streamStarted;

    int bufferIndex;
    bool FirstCapture;
    String deviceName;
@@ -345,6 +374,8 @@ struct CvCaptureCAM_V4L CV_FINAL : public CvCapture
    bool open(const char* deviceName);
    bool isOpened() const;

    void closeDevice();

    virtual double getProperty(int) const CV_OVERRIDE;
    virtual bool setProperty(int, double) CV_OVERRIDE;
    virtual bool grabFrame() CV_OVERRIDE;
@@ -381,7 +412,10 @@ struct CvCaptureCAM_V4L CV_FINAL : public CvCapture
/*********************** Implementations ***************************************/

CvCaptureCAM_V4L::CvCaptureCAM_V4L() :
    deviceHandle(-1),
    v4l_buffersRequested(false),
    v4l_streamStarted(false),
    bufferIndex(-1),
    FirstCapture(true),
    palette(0),
    width(0), height(0), width_set(0), height_set(0),
@@ -395,11 +429,32 @@ CvCaptureCAM_V4L::CvCaptureCAM_V4L() :
    memset(&timestamp, 0, sizeof(timestamp));
}

CvCaptureCAM_V4L::~CvCaptureCAM_V4L()
{
    try
    {
        closeDevice();
    }
    catch (...)
    {
        CV_LOG_WARNING(NULL, "VIDEOIO(V4L2): unable properly close device: " << deviceName);
        if (deviceHandle != -1)
            close(deviceHandle);
    }
}

void CvCaptureCAM_V4L::closeDevice()
{
    if (v4l_streamStarted)
        streaming(false);
    if (v4l_buffersRequested)
        releaseBuffers();
    if (deviceHandle != -1)
    {
        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): close(" << deviceHandle << ")");
        close(deviceHandle);
    }
    deviceHandle = -1;
}

bool CvCaptureCAM_V4L::isOpened() const
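The new `closeDevice()` only undoes the setup steps that actually ran, tracked by the `v4l_streamStarted` and `v4l_buffersRequested` flags, so teardown is safe at any stage of initialization. A simplified Python sketch of that guarded-teardown pattern (the class and fields are illustrative, not the real capture backend):

```python
class Capture:
    def __init__(self):
        # Nothing is set up yet; -1 mirrors the invalid file descriptor.
        self.handle = -1
        self.buffers_requested = False
        self.stream_started = False

    def close(self):
        # Undo steps strictly in reverse order of setup, and only the
        # ones that actually happened.
        if self.stream_started:
            self.stream_started = False    # stop streaming first
        if self.buffers_requested:
            self.buffers_requested = False  # then release the buffers
        if self.handle != -1:
            self.handle = -1                # finally close the device handle

cap = Capture()
cap.handle = 3
cap.buffers_requested = True
cap.close()
```

Because each step is guarded by its own flag, `close()` can run from the destructor, from an error path mid-open, or twice in a row without touching resources that were never acquired.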
@@ -415,7 +470,7 @@ bool CvCaptureCAM_V4L::try_palette_v4l2()
    form.fmt.pix.field = V4L2_FIELD_ANY;
    form.fmt.pix.width = width;
    form.fmt.pix.height = height;
    if (!tryIoctl(VIDIOC_S_FMT, &form, true))
    {
        return false;
    }
@@ -460,9 +515,7 @@ bool CvCaptureCAM_V4L::try_init_v4l2()
    // The cv::CAP_PROP_MODE used for set the video input channel number
    if (!setVideoInputChannel())
    {
        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): Unable to set Video Input Channel");
        return false;
    }

@@ -470,16 +523,14 @@ bool CvCaptureCAM_V4L::try_init_v4l2()
    capability = v4l2_capability();
    if (!tryIoctl(VIDIOC_QUERYCAP, &capability))
    {
        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): Unable to query capability");
        return false;
    }

    if ((capability.capabilities & V4L2_CAP_VIDEO_CAPTURE) == 0)
    {
        /* Nope. */
        CV_LOG_INFO(NULL, "VIDEOIO(V4L2:" << deviceName << "): not supported - device is unable to capture video (missing V4L2_CAP_VIDEO_CAPTURE)");
        return false;
    }
    return true;
@@ -488,10 +539,18 @@
bool CvCaptureCAM_V4L::autosetup_capture_mode_v4l2()
{
    //in case palette is already set and works, no need to setup.
    if (palette != 0)
    {
        if (try_palette_v4l2())
        {
            return true;
        }
        else if (errno == EBUSY)
        {
            CV_LOG_INFO(NULL, "VIDEOIO(V4L2:" << deviceName << "): device is busy");
            closeDevice();
            return false;
        }
    }
    __u32 try_order[] = {
        V4L2_PIX_FMT_BGR24,
@ -519,6 +578,10 @@ bool CvCaptureCAM_V4L::autosetup_capture_mode_v4l2()
|
|||||||
palette = try_order[i];
|
palette = try_order[i];
|
||||||
if (try_palette_v4l2()) {
|
if (try_palette_v4l2()) {
|
||||||
return true;
|
return true;
|
||||||
|
} else if (errno == EBUSY) {
|
||||||
|
CV_LOG_INFO(NULL, "VIDEOIO(V4L2:" << deviceName << "): device is busy");
|
||||||
|
closeDevice();
|
||||||
|
return false;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
return false;
|
return false;
|
||||||
@@ -534,9 +597,15 @@ bool CvCaptureCAM_V4L::setFps(int value)
     streamparm.parm.capture.timeperframe.numerator = 1;
     streamparm.parm.capture.timeperframe.denominator = __u32(value);
     if (!tryIoctl(VIDIOC_S_PARM, &streamparm) || !tryIoctl(VIDIOC_G_PARM, &streamparm))
+    {
+        CV_LOG_INFO(NULL, "VIDEOIO(V4L2:" << deviceName << "): can't set FPS: " << value);
         return false;
+    }
 
-    fps = streamparm.parm.capture.timeperframe.denominator;
+    CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): FPS="
+            << streamparm.parm.capture.timeperframe.denominator << "/"
+            << streamparm.parm.capture.timeperframe.numerator);
+    fps = streamparm.parm.capture.timeperframe.denominator; // TODO use numerator
     return true;
 }
 
@@ -631,10 +700,9 @@ bool CvCaptureCAM_V4L::initCapture()
     if (!isOpened())
         return false;
 
-    if (!try_init_v4l2()) {
-#ifndef NDEBUG
-        fprintf(stderr, " try_init_v4l2 open \"%s\": %s\n", deviceName.c_str(), strerror(errno));
-#endif
+    if (!try_init_v4l2())
+    {
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): init failed: errno=" << errno << " (" << strerror(errno) << ")");
         return false;
     }
 
@@ -642,14 +710,17 @@ bool CvCaptureCAM_V4L::initCapture()
     form = v4l2_format();
     form.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
 
-    if (!tryIoctl(VIDIOC_G_FMT, &form)) {
-        fprintf( stderr, "VIDEOIO ERROR: V4L2: Could not obtain specifics of capture window.\n");
+    if (!tryIoctl(VIDIOC_G_FMT, &form))
+    {
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): Could not obtain specifics of capture window (VIDIOC_G_FMT): errno=" << errno << " (" << strerror(errno) << ")");
         return false;
     }
 
-    if (!autosetup_capture_mode_v4l2()) {
-        if (errno != EBUSY) {
-            fprintf(stderr, "VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV\n");
+    if (!autosetup_capture_mode_v4l2())
+    {
+        if (errno != EBUSY)
+        {
+            CV_LOG_INFO(NULL, "VIDEOIO(V4L2:" << deviceName << "): Pixel format of incoming image is unsupported by OpenCV");
         }
         return false;
     }
@@ -691,16 +762,16 @@ bool CvCaptureCAM_V4L::requestBuffers()
 {
     unsigned int buffer_number = bufferSize;
     while (buffer_number > 0) {
-        if (!requestBuffers(buffer_number))
-            return false;
-        if (req.count >= buffer_number)
+        if (requestBuffers(buffer_number) && req.count >= buffer_number)
+        {
             break;
+        }
 
         buffer_number--;
-        fprintf(stderr, "Insufficient buffer memory on %s -- decreasing buffers\n", deviceName.c_str());
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): Insufficient buffer memory -- decreasing buffers: " << buffer_number);
     }
     if (buffer_number < 1) {
-        fprintf(stderr, "Insufficient buffer memory on %s\n", deviceName.c_str());
+        CV_LOG_WARNING(NULL, "VIDEOIO(V4L2:" << deviceName << "): Insufficient buffer memory");
         return false;
     }
     bufferSize = req.count;
@@ -718,13 +789,18 @@ bool CvCaptureCAM_V4L::requestBuffers(unsigned int buffer_number)
     req.memory = V4L2_MEMORY_MMAP;
 
     if (!tryIoctl(VIDIOC_REQBUFS, &req)) {
-        if (EINVAL == errno) {
-            fprintf(stderr, "%s does not support memory mapping\n", deviceName.c_str());
-        } else {
-            perror("VIDIOC_REQBUFS");
+        int err = errno;
+        if (EINVAL == err)
+        {
+            CV_LOG_WARNING(NULL, "VIDEOIO(V4L2:" << deviceName << "): no support for memory mapping");
+        }
+        else
+        {
+            CV_LOG_WARNING(NULL, "VIDEOIO(V4L2:" << deviceName << "): failed VIDIOC_REQBUFS: errno=" << err << " (" << strerror(err) << ")");
         }
         return false;
     }
+    v4l_buffersRequested = true;
     return true;
 }
 
@@ -738,7 +814,7 @@ bool CvCaptureCAM_V4L::createBuffers()
         buf.index = n_buffers;
 
         if (!tryIoctl(VIDIOC_QUERYBUF, &buf)) {
-            perror("VIDIOC_QUERYBUF");
+            CV_LOG_WARNING(NULL, "VIDEOIO(V4L2:" << deviceName << "): failed VIDIOC_QUERYBUF: errno=" << errno << " (" << strerror(errno) << ")");
             return false;
         }
 
@@ -751,7 +827,7 @@ bool CvCaptureCAM_V4L::createBuffers()
                 deviceHandle, buf.m.offset);
 
         if (MAP_FAILED == buffers[n_buffers].start) {
-            perror("mmap");
+            CV_LOG_WARNING(NULL, "VIDEOIO(V4L2:" << deviceName << "): failed mmap(" << buf.length << "): errno=" << errno << " (" << strerror(errno) << ")");
             return false;
         }
         maxLength = maxLength > buf.length ? maxLength : buf.length;
@@ -795,7 +871,7 @@ bool CvCaptureCAM_V4L::open(int _index)
     }
     if (_index < 0)
     {
-        fprintf(stderr, "VIDEOIO ERROR: V4L: can't find camera device\n");
+        CV_LOG_WARNING(NULL, "VIDEOIO(V4L2): can't find camera device");
         name.clear();
         return false;
     }
@@ -808,16 +884,15 @@ bool CvCaptureCAM_V4L::open(int _index)
     bool res = open(name.c_str());
     if (!res)
     {
-        CV_LOG_WARNING(NULL, cv::format("VIDEOIO ERROR: V4L: can't open camera by index %d", _index));
+        CV_LOG_WARNING(NULL, "VIDEOIO(V4L2:" << deviceName << "): can't open camera by index");
     }
     return res;
 }
 
 bool CvCaptureCAM_V4L::open(const char* _deviceName)
 {
-#ifndef NDEBUG
-    fprintf(stderr, "(DEBUG) V4L: opening %s\n", _deviceName);
-#endif
+    CV_Assert(_deviceName);
+    CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << _deviceName << "): opening...");
     FirstCapture = true;
     width = DEFAULT_V4L_WIDTH;
     height = DEFAULT_V4L_HEIGHT;
@@ -833,6 +908,7 @@ bool CvCaptureCAM_V4L::open(const char* _deviceName)
     bufferIndex = -1;
 
     deviceHandle = ::open(deviceName.c_str(), O_RDWR /* required */ | O_NONBLOCK, 0);
+    CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << _deviceName << "): deviceHandle=" << deviceHandle);
     if (deviceHandle == -1)
         return false;
 
@@ -846,7 +922,8 @@ bool CvCaptureCAM_V4L::read_frame_v4l2()
     buf.memory = V4L2_MEMORY_MMAP;
 
     while (!tryIoctl(VIDIOC_DQBUF, &buf)) {
-        if (errno == EIO && !(buf.flags & (V4L2_BUF_FLAG_QUEUED | V4L2_BUF_FLAG_DONE))) {
+        int err = errno;
+        if (err == EIO && !(buf.flags & (V4L2_BUF_FLAG_QUEUED | V4L2_BUF_FLAG_DONE))) {
             // Maybe buffer not in the queue? Try to put there
             if (!tryIoctl(VIDIOC_QBUF, &buf))
                 return false;
@@ -854,7 +931,7 @@ bool CvCaptureCAM_V4L::read_frame_v4l2()
         }
         /* display the error and stop processing */
         returnFrame = false;
-        perror("VIDIOC_DQBUF");
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): can't read frame (VIDIOC_DQBUF): errno=" << err << " (" << strerror(err) << ")");
         return false;
     }
 
@@ -872,38 +949,64 @@ bool CvCaptureCAM_V4L::read_frame_v4l2()
 
 bool CvCaptureCAM_V4L::tryIoctl(unsigned long ioctlCode, void *parameter, bool failIfBusy, int attempts) const
 {
-    CV_LOG_DEBUG(NULL, "tryIoctl(handle=" << deviceHandle << ", ioctl=0x" << std::hex << ioctlCode << ", ...)");
-    if (attempts == 0) {
-        return false;
-    }
-    while (-1 == ioctl(deviceHandle, ioctlCode, parameter)) {
-        const bool isBusy = (errno == EBUSY);
-        if (isBusy & failIfBusy) {
-            return false;
-        }
-        if ((attempts > 0) && (--attempts == 0)) {
-            return false;
-        }
+    CV_Assert(attempts > 0);
+    CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): tryIoctl(" << deviceHandle << ", "
+            << decode_ioctl_code(ioctlCode) << "(" << ioctlCode << "), failIfBusy=" << failIfBusy << ")"
+    );
+    while (true)
+    {
+        errno = 0;
+        int result = ioctl(deviceHandle, ioctlCode, parameter);
+        int err = errno;
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): call ioctl(" << deviceHandle << ", "
+                << decode_ioctl_code(ioctlCode) << "(" << ioctlCode << "), ...) => "
+                << result << " errno=" << err << " (" << strerror(err) << ")"
+        );
 
+        if (result != -1)
+            return true; // success
+
+        const bool isBusy = (err == EBUSY);
+        if (isBusy && failIfBusy)
+        {
+            CV_LOG_INFO(NULL, "VIDEOIO(V4L2:" << deviceName << "): ioctl returns with errno=EBUSY");
+            return false;
+        }
         if (!(isBusy || errno == EAGAIN))
             return false;
 
+        if (--attempts == 0) {
+            return false;
+        }
+
         fd_set fds;
         FD_ZERO(&fds);
         FD_SET(deviceHandle, &fds);
 
         /* Timeout. */
+        static int param_v4l_select_timeout = (int)utils::getConfigurationParameterSizeT("OPENCV_VIDEOIO_V4L_SELECT_TIMEOUT", 10);
         struct timeval tv;
-        tv.tv_sec = 10;
+        tv.tv_sec = param_v4l_select_timeout;
         tv.tv_usec = 0;
 
-        int result = select(deviceHandle + 1, &fds, NULL, NULL, &tv);
-        if (0 == result) {
-            fprintf(stderr, "select timeout\n");
+        errno = 0;
+        result = select(deviceHandle + 1, &fds, NULL, NULL, &tv);
+        err = errno;
+
+        if (0 == result)
+        {
+            CV_LOG_WARNING(NULL, "VIDEOIO(V4L2:" << deviceName << "): select() timeout.");
+            return false;
+        }
+
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): select(" << deviceHandle << ") => "
+                << result << " errno = " << err << " (" << strerror(err) << ")"
+        );
+
+        if (EINTR == err) // don't loop if signal occurred, like Ctrl+C
+        {
             return false;
         }
-        if (-1 == result && EINTR != errno)
-            perror("select");
     }
     return true;
 }
@@ -930,14 +1033,12 @@ bool CvCaptureCAM_V4L::grabFrame()
             buf.index = index;
 
             if (!tryIoctl(VIDIOC_QBUF, &buf)) {
-                perror("VIDIOC_QBUF");
+                CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): failed VIDIOC_QBUF (buffer=" << index << "): errno=" << errno << " (" << strerror(errno) << ")");
                 return false;
             }
         }
 
-        if(!streaming(true)) {
-            /* error enabling the stream */
-            perror("VIDIOC_STREAMON");
+        if (!streaming(true)) {
             return false;
         }
 
@@ -952,9 +1053,12 @@ bool CvCaptureCAM_V4L::grabFrame()
         FirstCapture = false;
     }
     // In the case that the grab frame was without retrieveFrame
-    if (bufferIndex >= 0) {
+    if (bufferIndex >= 0)
+    {
         if (!tryIoctl(VIDIOC_QBUF, &buffers[bufferIndex].buffer))
-            perror("VIDIOC_QBUF");
+        {
+            CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): failed VIDIOC_QBUF (buffer=" << bufferIndex << "): errno=" << errno << " (" << strerror(errno) << ")");
+        }
     }
     return read_frame_v4l2();
 }
@@ -1469,6 +1573,7 @@ void CvCaptureCAM_V4L::convertToRgb(const Buffer &currentBuffer)
 #ifdef HAVE_JPEG
     case V4L2_PIX_FMT_MJPEG:
     case V4L2_PIX_FMT_JPEG:
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): decoding JPEG frame: size=" << currentBuffer.buffer.bytesused);
         cv::imdecode(Mat(1, currentBuffer.buffer.bytesused, CV_8U, currentBuffer.start), IMREAD_COLOR, &destination);
         return;
 #endif
@@ -1711,7 +1816,7 @@ bool CvCaptureCAM_V4L::controlInfo(int property_id, __u32 &_v4l2id, cv::Range &r
     v4l2_queryctrl queryctrl = v4l2_queryctrl();
     queryctrl.id = __u32(v4l2id);
     if (v4l2id == -1 || !tryIoctl(VIDIOC_QUERYCTRL, &queryctrl)) {
-        fprintf(stderr, "VIDEOIO ERROR: V4L2: property %s is not supported\n", capPropertyName(property_id).c_str());
+        CV_LOG_INFO(NULL, "VIDEOIO(V4L2:" << deviceName << "): property " << capPropertyName(property_id) << " is not supported");
         return false;
     }
     _v4l2id = __u32(v4l2id);
@@ -1742,7 +1847,9 @@ bool CvCaptureCAM_V4L::icvControl(__u32 v4l2id, int &value, bool isSet) const
 
     /* The driver may clamp the value or return ERANGE, ignored here */
     if (!tryIoctl(isSet ? VIDIOC_S_CTRL : VIDIOC_G_CTRL, &control)) {
-        switch (errno) {
+        int err = errno;
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): failed " << (isSet ? "VIDIOC_S_CTRL" : "VIDIOC_G_CTRL") << ": errno=" << err << " (" << strerror(err) << ")");
+        switch (err) {
 #ifndef NDEBUG
         case EINVAL:
             fprintf(stderr,
@@ -1757,7 +1864,6 @@ bool CvCaptureCAM_V4L::icvControl(__u32 v4l2id, int &value, bool isSet) const
             break;
 #endif
         default:
-            perror(isSet ? "VIDIOC_S_CTRL" : "VIDIOC_G_CTRL");
             break;
         }
         return false;
@@ -1791,7 +1897,7 @@ double CvCaptureCAM_V4L::getProperty(int property_id) const
         v4l2_streamparm sp = v4l2_streamparm();
         sp.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
         if (!tryIoctl(VIDIOC_G_PARM, &sp)) {
-            fprintf(stderr, "VIDEOIO ERROR: V4L: Unable to get camera FPS\n");
+            CV_LOG_WARNING(NULL, "VIDEOIO(V4L2:" << deviceName << "): Unable to get camera FPS");
             return -1;
         }
         return sp.parm.capture.timeperframe.denominator / (double)sp.parm.capture.timeperframe.numerator;
@@ -1881,7 +1987,7 @@ bool CvCaptureCAM_V4L::setProperty( int property_id, double _value )
             return true;
 
         if (value > MAX_V4L_BUFFERS || value < 1) {
-            fprintf(stderr, "V4L: Bad buffer size %d, buffer size must be from 1 to %d\n", value, MAX_V4L_BUFFERS);
+            CV_LOG_WARNING(NULL, "VIDEOIO(V4L2:" << deviceName << "): Bad buffer size " << value << ", buffer size must be from 1 to " << MAX_V4L_BUFFERS);
             return false;
         }
         bufferSize = value;
@@ -1937,13 +2043,15 @@ void CvCaptureCAM_V4L::releaseBuffers()
 
     bufferIndex = -1;
     FirstCapture = true;
-    if (!isOpened())
+    if (!v4l_buffersRequested)
         return;
+    v4l_buffersRequested = false;
+
 
     for (unsigned int n_buffers = 0; n_buffers < MAX_V4L_BUFFERS; ++n_buffers) {
         if (buffers[n_buffers].start) {
             if (-1 == munmap(buffers[n_buffers].start, buffers[n_buffers].length)) {
-                perror("munmap");
+                CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): failed munmap(): errno=" << errno << " (" << strerror(errno) << ")");
             } else {
                 buffers[n_buffers].start = 0;
             }
@@ -1957,11 +2065,28 @@ void CvCaptureCAM_V4L::releaseBuffers()
 
 bool CvCaptureCAM_V4L::streaming(bool startStream)
 {
-    if (!isOpened())
-        return !startStream;
+    if (startStream != v4l_streamStarted)
+    {
+        if (!isOpened())
+        {
+            CV_Assert(v4l_streamStarted == false);
+            return !startStream;
+        }
 
-    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-    return tryIoctl(startStream ? VIDIOC_STREAMON : VIDIOC_STREAMOFF, &type);
+        type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+        bool result = tryIoctl(startStream ? VIDIOC_STREAMON : VIDIOC_STREAMOFF, &type);
+        if (result)
+        {
+            v4l_streamStarted = startStream;
+            return true;
+        }
+        if (startStream)
+        {
+            CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): failed VIDIOC_STREAMON: errno=" << errno << " (" << strerror(errno) << ")");
+        }
+        return false;
+    }
+    return startStream;
 }
 
 IplImage *CvCaptureCAM_V4L::retrieveFrame(int)
@@ -1981,6 +2106,7 @@ IplImage *CvCaptureCAM_V4L::retrieveFrame(int)
     } else {
         // for mjpeg streams the size might change in between, so we have to change the header
         // We didn't allocate memory when not convert_rgb, but we have to recreate the header
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): buffer input size=" << currentBuffer.buffer.bytesused);
         if (frame.imageSize != (int)currentBuffer.buffer.bytesused)
             v4l2_create_frame();
 
@@ -1990,7 +2116,9 @@ IplImage *CvCaptureCAM_V4L::retrieveFrame(int)
     }
     //Revert buffer to the queue
     if (!tryIoctl(VIDIOC_QBUF, &buffers[bufferIndex].buffer))
-        perror("VIDIOC_QBUF");
+    {
+        CV_LOG_DEBUG(NULL, "VIDEOIO(V4L2:" << deviceName << "): failed VIDIOC_QBUF: errno=" << errno << " (" << strerror(errno) << ")");
+    }
 
     bufferIndex = -1;
     return &frame;
@ -51,6 +51,12 @@
|
|||||||
#endif
|
#endif
|
||||||
|
|
||||||
#include <opencv2/core/utils/configuration.private.hpp>
|
#include <opencv2/core/utils/configuration.private.hpp>
|
||||||
|
#include <opencv2/core/utils/logger.defines.hpp>
|
||||||
|
#ifdef NDEBUG
|
||||||
|
#define CV_LOG_STRIP_LEVEL CV_LOG_LEVEL_DEBUG + 1
|
||||||
|
#else
|
||||||
|
#define CV_LOG_STRIP_LEVEL CV_LOG_LEVEL_VERBOSE + 1
|
||||||
|
#endif
|
||||||
#include <opencv2/core/utils/logger.hpp>
|
#include <opencv2/core/utils/logger.hpp>
|
||||||
|
|
||||||
#include "opencv2/imgcodecs.hpp"
|
#include "opencv2/imgcodecs.hpp"
|
||||||
@@ -64,7 +64,7 @@ def createSSDGraph(modelPath, configPath, outputPath):
     # Nodes that should be kept.
     keepOps = ['Conv2D', 'BiasAdd', 'Add', 'Relu', 'Relu6', 'Placeholder', 'FusedBatchNorm',
                'DepthwiseConv2dNative', 'ConcatV2', 'Mul', 'MaxPool', 'AvgPool', 'Identity',
-               'Sub', 'ResizeNearestNeighbor', 'Pad']
+               'Sub', 'ResizeNearestNeighbor', 'Pad', 'FusedBatchNormV3']
 
     # Node with which prefixes should be removed
     prefixesToRemove = ('MultipleGridAnchorGenerator/', 'Concatenate/', 'Postprocessor/', 'Preprocessor/map')