mirror of https://github.com/opencv/opencv.git (synced 2025-01-18 06:03:15 +08:00)

Merge remote-tracking branch 'upstream/3.4' into merge-3.4

Commit 0d7f770996
@@ -150,7 +150,7 @@ We observe that @ref cv::Mat::zeros returns a Matlab-style zero initializer base

Notice the following (**C++ code only**):
- To access each pixel in the images we are using this syntax: *image.at\<Vec3b\>(y,x)[c]*
-  where *y* is the row, *x* is the column and *c* is R, G or B (0, 1 or 2).
+  where *y* is the row, *x* is the column and *c* is B, G or R (0, 1 or 2).
- Since the operation \f$\alpha \cdot p(i,j) + \beta\f$ can give values out of range or not
  integers (if \f$\alpha\f$ is float), we use cv::saturate_cast to make sure the
  values are valid.
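For context, a minimal sketch of the access pattern this hunk documents (`image`, `new_image`, `alpha` and `beta` follow the tutorial's naming; treat it as illustrative):
@code{.cpp}
// new_image(y,x) = saturate_cast<uchar>( alpha*image(y,x) + beta ), per channel
for( int y = 0; y < image.rows; y++ )
    for( int x = 0; x < image.cols; x++ )
        for( int c = 0; c < image.channels(); c++ )
            new_image.at<Vec3b>(y,x)[c] =
                saturate_cast<uchar>( alpha*image.at<Vec3b>(y,x)[c] + beta );
@endcode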
@@ -220,12 +220,12 @@ gamma correction.

### Brightness and contrast adjustments

Increasing (/ decreasing) the \f$\beta\f$ value will add (/ subtract) a constant value to every pixel. Pixel values outside of the [0 ; 255]
-range will be saturated (i.e. a pixel value higher (/ lesser) than 255 (/ 0) will be clamp to 255 (/ 0)).
+range will be saturated (i.e. a pixel value higher (/ lesser) than 255 (/ 0) will be clamped to 255 (/ 0)).

![In light gray, histogram of the original image, in dark gray when brightness = 80 in Gimp](images/Basic_Linear_Transform_Tutorial_hist_beta.png)

The histogram represents for each color level the number of pixels with that color level. A dark image will have many pixels with
-low color value and thus the histogram will present a peak in his left part. When adding a constant bias, the histogram is shifted to the
+low color value and thus the histogram will present a peak in its left part. When adding a constant bias, the histogram is shifted to the
right as we have added a constant bias to all the pixels.

The \f$\alpha\f$ parameter will modify how the levels spread. If \f$ \alpha < 1 \f$, the color levels will be compressed and the result
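The same linear transform can also be applied without an explicit loop; a minimal sketch using cv::Mat::convertTo, which saturates out-of-range values automatically:
@code{.cpp}
Mat new_image;
image.convertTo(new_image, -1, alpha, beta);  // rtype -1 keeps the source depth
@endcode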
@@ -10,7 +10,7 @@ Goal

We'll seek answers for the following questions:

- How to go through each and every pixel of an image?
-- How is OpenCV matrix values stored?
+- How are OpenCV matrix values stored?
- How to measure the performance of our algorithm?
- What are lookup tables and why use them?
@@ -45,13 +45,13 @@ operation. In case of the *uchar* system this is 256 to be exact.

Therefore, for larger images it would be wise to calculate all possible values beforehand and during
the assignment just make the assignment, by using a lookup table. Lookup tables are simple arrays
(having one or more dimensions) that for a given input value variation holds the final output value.
-Its strength lies that we do not need to make the calculation, we just need to read the result.
+Its strength is that we do not need to make the calculation, we just need to read the result.

-Our test case program (and the sample presented here) will do the following: read in a console line
-argument image (that may be either color or gray scale - console line argument too) and apply the
-reduction with the given console line argument integer value. In OpenCV, at the moment there are
+Our test case program (and the code sample below) will do the following: read in an image passed
+as a command line argument (it may be either color or grayscale) and apply the reduction
+with the given command line argument integer value. In OpenCV, at the moment there are
three major ways of going through an image pixel by pixel. To make things a little more interesting
-will make the scanning for each image using all of these methods, and print out how long it took.
+we'll make the scanning of the image using each of these methods, and print out how long it took.

You can download the full source code [here
](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp) or look it up in
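A minimal sketch of such a lookup table for the reduction described above (`divideWith` is the tutorial's quantization divisor; the rest is illustrative):
@code{.cpp}
uchar table[256];
for (int i = 0; i < 256; ++i)
    table[i] = (uchar)(divideWith * (i / divideWith));  // precompute every possible output once
// later, per pixel: p = table[p]; no arithmetic left in the inner loop
@endcode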
@@ -59,7 +59,7 @@ the samples directory of OpenCV at the cpp tutorial code for the core section. I
@code{.bash}
how_to_scan_images imageName.jpg intValueToReduce [G]
@endcode
-The final argument is optional. If given the image will be loaded in gray scale format, otherwise
+The final argument is optional. If given the image will be loaded in grayscale format, otherwise
the BGR color space is used. The first thing is to calculate the lookup table.

@snippet how_to_scan_images.cpp dividewith
@@ -71,8 +71,8 @@ No OpenCV specific stuff here.

Another issue is how do we measure time? Well OpenCV offers two simple functions to achieve this
cv::getTickCount() and cv::getTickFrequency() . The first returns the number of ticks of
your systems CPU from a certain event (like since you booted your system). The second returns how
-many times your CPU emits a tick during a second. So to measure in seconds the number of time
-elapsed between two operations is easy as:
+many times your CPU emits a tick during a second. So, measuring amount of time elapsed between
+two operations is as easy as:
@code{.cpp}
double t = (double)getTickCount();
// do something ...
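// Illustrative continuation (the hunk is cut off above): elapsed seconds are
// then obtained by differencing tick counts and dividing by the tick frequency:
t = ((double)getTickCount() - t)/getTickFrequency();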
@@ -85,8 +85,8 @@ How is the image matrix stored in memory?
-----------------------------------------

As you could already read in my @ref tutorial_mat_the_basic_image_container tutorial the size of the matrix
-depends on the color system used. More accurately, it depends from the number of channels used. In
-case of a gray scale image we have something like:
+depends on the color system used. More accurately, it depends on the number of channels used. In
+case of a grayscale image we have something like:

![](tutorial_how_matrix_stored_1.png)
@@ -117,12 +117,12 @@ three channels so we need to pass through three times more items in each row.
There's another way of this. The *data* data member of a *Mat* object returns the pointer to the
first row, first column. If this pointer is null you have no valid input in that object. Checking
this is the simplest method to check if your image loading was a success. In case the storage is
-continuous we can use this to go through the whole data pointer. In case of a gray scale image this
+continuous we can use this to go through the whole data pointer. In case of a grayscale image this
would look like:
@code{.cpp}
uchar* p = I.data;

-for( unsigned int i =0; i < ncol*nrows; ++i)
+for( unsigned int i = 0; i < ncol*nrows; ++i)
    *p++ = table[*p];
@endcode
You would get the same result. However, this code is a lot harder to read later on. It gets even
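As an aside, `*p++ = table[*p]` has unsequenced side effects before C++17; a clearly sequenced sketch of the same scan:
@code{.cpp}
uchar* p = I.data;
for( unsigned int i = 0; i < ncol*nrows; ++i, ++p )
    *p = table[*p];   // read and write of *p are now ordered within one statement
@endcode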
@@ -135,7 +135,7 @@ The iterator (safe) method

In case of the efficient way making sure that you pass through the right amount of *uchar* fields
and to skip the gaps that may occur between the rows was your responsibility. The iterator method is
-considered a safer way as it takes over these tasks from the user. All you need to do is ask the
+considered a safer way as it takes over these tasks from the user. All you need to do is to ask the
begin and the end of the image matrix and then just increase the begin iterator until you reach the
end. To acquire the value *pointed* by the iterator use the \* operator (add it before it).
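A minimal sketch of that iterator pattern for a grayscale image (names follow the tutorial's conventions):
@code{.cpp}
MatIterator_<uchar> it, end;
for( it = I.begin<uchar>(), end = I.end<uchar>(); it != end; ++it )
    *it = table[*it];   // the iterator skips any row padding for us
@endcode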
@@ -152,17 +152,17 @@ On-the-fly address calculation with reference returning

The final method isn't recommended for scanning. It was made to acquire or modify somehow random
elements in the image. Its basic usage is to specify the row and column number of the item you want
-to access. During our earlier scanning methods you could already observe that is important through
+to access. During our earlier scanning methods you could already notice that it is important through
what type we are looking at the image. It's no different here as you need to manually specify what
-type to use at the automatic lookup. You can observe this in case of the gray scale images for the
+type to use at the automatic lookup. You can observe this in case of the grayscale images for the
following source code (the usage of the + cv::Mat::at() function):

@snippet how_to_scan_images.cpp scan-random

-The functions takes your input type and coordinates and calculates on the fly the address of the
+The function takes your input type and coordinates and calculates the address of the
queried item. Then returns a reference to that. This may be a constant when you *get* the value and
-non-constant when you *set* the value. As a safety step in **debug mode only**\* there is performed
-a check that your input coordinates are valid and does exist. If this isn't the case you'll get a
+non-constant when you *set* the value. As a safety step in **debug mode only**\* there is a check
+performed that your input coordinates are valid and do exist. If this isn't the case you'll get a
nice output message of this on the standard error output stream. Compared to the efficient way in
release mode the only difference in using this is that for every element of the image you'll get a
new row pointer for what we use the C operator[] to acquire the column element.
@@ -173,7 +173,7 @@ OpenCV has a cv::Mat_ data type. It's the same as Mat with the extra need that a
you need to specify the data type through what to look at the data matrix, however in return you can
use the operator() for fast access of items. To make things even better this is easily convertible
from and to the usual cv::Mat data type. A sample usage of this you can see in case of the
-color images of the upper function. Nevertheless, it's important to note that the same operation
+color images of the function above. Nevertheless, it's important to note that the same operation
(with the same runtime speed) could have been done with the cv::Mat::at function. It's just a less
to write for the lazy programmer trick.
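An illustrative sketch of the Mat_ access style (assuming `I` is a CV_8UC3 color image):
@code{.cpp}
Mat_<Vec3b> _I = I;                      // cheap header conversion, data is shared
for( int i = 0; i < I.rows; ++i )
    for( int j = 0; j < I.cols; ++j )
        _I(i,j)[0] = table[_I(i,j)[0]];  // operator() instead of at<Vec3b>()
I = _I;                                  // convert the header back if needed
@endcode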
@@ -195,7 +195,7 @@ Finally call the function (I is our input image and J the output one):

Performance Difference
----------------------

-For the best result compile the program and run it on your own speed. To make the differences more
+For the best result compile the program and run it yourself. To make the differences more
clear, I've used a quite large (2560 X 1600) image. The performance presented here are for
color images. For a more accurate value I've averaged the value I got from the call of the function
for hundred times.
@@ -4,7 +4,7 @@ Mask operations on matrices {#tutorial_mat_mask_operations}

@prev_tutorial{tutorial_how_to_scan_images}
@next_tutorial{tutorial_mat_operations}

-Mask operations on matrices are quite simple. The idea is that we recalculate each pixels value in
+Mask operations on matrices are quite simple. The idea is that we recalculate each pixel's value in
an image according to a mask matrix (also known as kernel). This mask holds values that will adjust
how much influence neighboring pixels (and the current pixel) have on the new pixel value. From a
mathematical point of view we make a weighted average, with our specified values.
@@ -12,7 +12,7 @@ mathematical point of view we make a weighted average, with our specified values

Our test case
-------------

-Let us consider the issue of an image contrast enhancement method. Basically we want to apply for
+Let's consider the issue of an image contrast enhancement method. Basically we want to apply for
every pixel of the image the following formula:

\f[I(i,j) = 5*I(i,j) - [ I(i-1,j) + I(i+1,j) + I(i,j-1) + I(i,j+1)]\f]\f[\iff I(i,j)*M, \text{where }
@@ -144,7 +144,7 @@ Then we apply the sum and put the new value in the Result matrix.

The filter2D function
---------------------

-Applying such filters are so common in image processing that in OpenCV there exist a function that
+Applying such filters are so common in image processing that in OpenCV there is a function that
will take care of applying the mask (also called a kernel in some places). For this you first need
to define an object that holds the mask:
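The mask object and the call then look roughly like this (a sketch of the tutorial's kernel):
@code{.cpp}
Mat kernel = (Mat_<char>(3,3) <<  0, -1,  0,
                                 -1,  5, -1,
                                  0, -1,  0);
filter2D( src, dst, src.depth(), kernel );
@endcode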
@@ -61,7 +61,7 @@ The last thing we want to do is further decrease the speed of your program by ma
copies of potentially *large* images.

To tackle this issue OpenCV uses a reference counting system. The idea is that each *Mat* object has
-its own header, however the matrix may be shared between two instance of them by having their matrix
+its own header, however a matrix may be shared between two *Mat* objects by having their matrix
pointers point to the same address. Moreover, the copy operators **will only copy the headers** and
the pointer to the large matrix, not the data itself.
@@ -74,32 +74,32 @@ Mat B(A); // Use the copy constructor
C = A; // Assignment operator
@endcode

-All the above objects, in the end, point to the same single data matrix. Their headers are
-different, however, and making a modification using any of them will affect all the other ones as
-well. In practice the different objects just provide different access method to the same underlying
-data. Nevertheless, their header parts are different. The real interesting part is that you can
-create headers which refer to only a subsection of the full data. For example, to create a region of
-interest (*ROI*) in an image you just create a new header with the new boundaries:
+All the above objects, in the end, point to the same single data matrix and making a modification
+using any of them will affect all the other ones as well. In practice the different objects just
+provide different access methods to the same underlying data. Nevertheless, their header parts are
+different. The real interesting part is that you can create headers which refer to only a subsection
+of the full data. For example, to create a region of interest (*ROI*) in an image you just create
+a new header with the new boundaries:
@code{.cpp}
Mat D (A, Rect(10, 10, 100, 100) ); // using a rectangle
Mat E = A(Range::all(), Range(1,3)); // using row and column boundaries
@endcode
-Now you may ask if the matrix itself may belong to multiple *Mat* objects who takes responsibility
+Now you may ask -- if the matrix itself may belong to multiple *Mat* objects who takes responsibility
for cleaning it up when it's no longer needed. The short answer is: the last object that used it.
This is handled by using a reference counting mechanism. Whenever somebody copies a header of a
-*Mat* object, a counter is increased for the matrix. Whenever a header is cleaned this counter is
-decreased. When the counter reaches zero the matrix too is freed. Sometimes you will want to copy
-the matrix itself too, so OpenCV provides the @ref cv::Mat::clone() and @ref cv::Mat::copyTo() functions.
+*Mat* object, a counter is increased for the matrix. Whenever a header is cleaned, this counter
+is decreased. When the counter reaches zero the matrix is freed. Sometimes you will want to copy
+the matrix itself too, so OpenCV provides @ref cv::Mat::clone() and @ref cv::Mat::copyTo() functions.
@code{.cpp}
Mat F = A.clone();
Mat G;
A.copyTo(G);
@endcode
-Now modifying *F* or *G* will not affect the matrix pointed by the *Mat* header. What you need to
+Now modifying *F* or *G* will not affect the matrix pointed by the *A*'s header. What you need to
remember from all this is that:

- Output image allocation for OpenCV functions is automatic (unless specified otherwise).
-- You do not need to think about memory management with OpenCVs C++ interface.
+- You do not need to think about memory management with OpenCV's C++ interface.
- The assignment operator and the copy constructor only copies the header.
- The underlying matrix of an image may be copied using the @ref cv::Mat::clone() and @ref cv::Mat::copyTo()
  functions.
@@ -109,7 +109,7 @@ Storing methods

This is about how you store the pixel values. You can select the color space and the data type used.
The color space refers to how we combine color components in order to code a given color. The
-simplest one is the gray scale where the colors at our disposal are black and white. The combination
+simplest one is the grayscale where the colors at our disposal are black and white. The combination
of these allows us to create many shades of gray.

For *colorful* ways we have a lot more methods to choose from. Each of them breaks it down to three
@@ -121,15 +121,15 @@ added.

There are, however, many other color systems each with their own advantages:

- RGB is the most common as our eyes use something similar, however keep in mind that OpenCV standard display
-  system composes colors using the BGR color space (a switch of the red and blue channel).
+  system composes colors using the BGR color space (red and blue channels are swapped places).
- The HSV and HLS decompose colors into their hue, saturation and value/luminance components,
  which is a more natural way for us to describe colors. You might, for example, dismiss the last
  component, making your algorithm less sensible to the light conditions of the input image.
- YCrCb is used by the popular JPEG image format.
-- CIE L\*a\*b\* is a perceptually uniform color space, which comes handy if you need to measure
+- CIE L\*a\*b\* is a perceptually uniform color space, which comes in handy if you need to measure
  the *distance* of a given color to another color.
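A one-line illustration of moving between these spaces (a sketch; `bgr_image` is an assumed input):
@code{.cpp}
Mat hsv;
cvtColor(bgr_image, hsv, COLOR_BGR2HSV);  // e.g. ignore the V channel afterwards for some lighting invariance
@endcode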
-Each of the building components has their own valid domains. This leads to the data type used. How
+Each of the building components has its own valid domains. This leads to the data type used. How
we store a component defines the control we have over its domain. The smallest data type possible is
*char*, which means one byte or 8 bits. This may be unsigned (so can store values from 0 to 255) or
signed (values from -127 to +127). Although in case of three components this already gives 16
@@ -165,8 +165,8 @@ object in multiple ways:
CV_[The number of bits per item][Signed or Unsigned][Type Prefix]C[The channel number]
@endcode
For instance, *CV_8UC3* means we use unsigned char types that are 8 bit long and each pixel has
-three of these to form the three channels. This are predefined for up to four channel numbers. The
-@ref cv::Scalar is four element short vector. Specify this and you can initialize all matrix
+three of these to form the three channels. There are types predefined for up to four channels. The
+@ref cv::Scalar is four element short vector. Specify it and you can initialize all matrix
points with a custom value. If you need more you can create the type with the upper macro, setting
the channel number in parenthesis as you can see below.
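For instance, a constructor call combining such a type with a cv::Scalar initializer (a sketch mirroring the tutorial's example):
@code{.cpp}
Mat M(2, 2, CV_8UC3, Scalar(0, 0, 255));  // 2x2, 3 channels of uchar, every pixel red in BGR order
@endcode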
@@ -210,7 +210,7 @@ object in multiple ways:

@note
You can fill out a matrix with random values using the @ref cv::randu() function. You need to
-give the lower and upper value for the random values:
+give a lower and upper limit for the random values:
@snippet mat_the_basic_image_container.cpp random
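The referenced snippet has roughly this shape (illustrative):
@code{.cpp}
Mat R = Mat(3, 2, CV_8UC3);
randu(R, Scalar::all(0), Scalar::all(255));  // uniform random values between the two limits
@endcode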
@@ -1715,8 +1715,8 @@ CV_EXPORTS_W double stereoCalibrate( InputArrayOfArrays objectPoints,
@param cameraMatrix2 Second camera matrix.
@param distCoeffs2 Second camera distortion parameters.
@param imageSize Size of the image used for stereo calibration.
-@param R Rotation matrix between the coordinate systems of the first and the second cameras.
-@param T Translation vector between coordinate systems of the cameras.
+@param R Rotation matrix from the coordinate system of the first camera to the second.
+@param T Translation vector from the coordinate system of the first camera to the second.
@param R1 Output 3x3 rectification transform (rotation matrix) for the first camera.
@param R2 Output 3x3 rectification transform (rotation matrix) for the second camera.
@param P1 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
@@ -2378,8 +2378,11 @@ CV_EXPORTS_W void validateDisparity( InputOutputArray disparity, InputArray cost
/** @brief Reprojects a disparity image to 3D space.

@param disparity Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit
-floating-point disparity image. If 16-bit signed format is used, the values are assumed to have no
-fractional bits.
+floating-point disparity image.
+The values of 8-bit / 16-bit signed formats are assumed to have no
+fractional bits.
+If the disparity is 16-bit signed format as computed by
+StereoBM/StereoSGBM/StereoBinaryBM/StereoBinarySGBM and may be other algorithms,
+it should be divided by 16 (and scaled to float) before being used here.
@param _3dImage Output 3-channel floating-point image of the same size as disparity . Each
element of _3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity
map.
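A sketch of the scaling step the new wording asks for, mirroring the stereo_match.cpp change further down in this diff (`disp`, `xyz`, `Q` as in that sample):
@code{.cpp}
Mat floatDisp;
disp.convertTo(floatDisp, CV_32F, 1.0 / 16.0);  // CV_16S StereoBM/StereoSGBM output carries 4 fractional bits
reprojectImageTo3D(floatDisp, xyz, Q, true);
@endcode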
File diff suppressed because it is too large
@@ -74,8 +74,6 @@ namespace cv
the floating-point value is first rounded to the nearest integer and then clipped if needed (when
the target type is 8- or 16-bit).

-This operation is used in the simplest or most complex image processing functions in OpenCV.
-
@param v Function parameter.
@sa add, subtract, multiply, divide, Mat::convertTo
*/
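A tiny illustration of the documented rounding-and-clipping behavior:
@code{.cpp}
uchar a = saturate_cast<uchar>(300);    // a == 255: clipped to the uchar range
uchar b = saturate_cast<uchar>(-10.5);  // b == 0:   rounded, then clipped
@endcode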
@@ -563,25 +563,206 @@ Mat& Mat::setTo(InputArray _value, InputArray _mask)
    return *this;
}

+#if CV_SIMD128
+template<typename V> CV_ALWAYS_INLINE void flipHoriz_single( const uchar* src, size_t sstep, uchar* dst, size_t dstep, Size size, size_t esz )
+{
+    typedef typename V::lane_type T;
+    int end = (int)(size.width*esz);
+    int width = (end + 1)/2;
+    int width_1 = width & -v_uint8x16::nlanes;
+    int i, j;
+
+    for( ; size.height--; src += sstep, dst += dstep )
+    {
+        for( i = 0, j = end; i < width_1; i += v_uint8x16::nlanes, j -= v_uint8x16::nlanes )
+        {
+            V t0, t1;
+
+            t0 = v_load((T*)((uchar*)src + i));
+            t1 = v_load((T*)((uchar*)src + j - v_uint8x16::nlanes));
+            t0 = v_reverse(t0);
+            t1 = v_reverse(t1);
+            v_store((T*)(dst + j - v_uint8x16::nlanes), t0);
+            v_store((T*)(dst + i), t1);
+        }
+        if (((size_t)src|(size_t)dst) % sizeof(T) == 0)
+        {
+            for ( ; i < width; i += sizeof(T), j -= sizeof(T) )
+            {
+                T t0, t1;
+
+                t0 = *((T*)((uchar*)src + i));
+                t1 = *((T*)((uchar*)src + j - sizeof(T)));
+                *((T*)(dst + j - sizeof(T))) = t0;
+                *((T*)(dst + i)) = t1;
+            }
+        }
+        else
+        {
+            for ( ; i < width; i += sizeof(T), j -= sizeof(T) )
+            {
+                for (int k = 0; k < (int)sizeof(T); k++)
+                {
+                    uchar t0, t1;
+
+                    t0 = *((uchar*)src + i + k);
+                    t1 = *((uchar*)src + j + k - sizeof(T));
+                    *(dst + j + k - sizeof(T)) = t0;
+                    *(dst + i + k) = t1;
+                }
+            }
+        }
+    }
+}
+
+template<typename T1, typename T2> CV_ALWAYS_INLINE void flipHoriz_double( const uchar* src, size_t sstep, uchar* dst, size_t dstep, Size size, size_t esz )
+{
+    int end = (int)(size.width*esz);
+    int width = (end + 1)/2;
+
+    for( ; size.height--; src += sstep, dst += dstep )
+    {
+        for ( int i = 0, j = end; i < width; i += sizeof(T1) + sizeof(T2), j -= sizeof(T1) + sizeof(T2) )
+        {
+            T1 t0, t1;
+            T2 t2, t3;
+
+            t0 = *((T1*)((uchar*)src + i));
+            t2 = *((T2*)((uchar*)src + i + sizeof(T1)));
+            t1 = *((T1*)((uchar*)src + j - sizeof(T1) - sizeof(T2)));
+            t3 = *((T2*)((uchar*)src + j - sizeof(T2)));
+            *((T1*)(dst + j - sizeof(T1) - sizeof(T2))) = t0;
+            *((T2*)(dst + j - sizeof(T2))) = t2;
+            *((T1*)(dst + i)) = t1;
+            *((T2*)(dst + i + sizeof(T1))) = t3;
+        }
+    }
+}
+#endif

static void
flipHoriz( const uchar* src, size_t sstep, uchar* dst, size_t dstep, Size size, size_t esz )
{
-    int i, j, limit = (int)(((size.width + 1)/2)*esz);
-    AutoBuffer<int> _tab(size.width*esz);
-    int* tab = _tab.data();
-
-    for( i = 0; i < size.width; i++ )
-        for( size_t k = 0; k < esz; k++ )
-            tab[i*esz + k] = (int)((size.width - i - 1)*esz + k);
-
-    for( ; size.height--; src += sstep, dst += dstep )
-    {
-        for( i = 0; i < limit; i++ )
-        {
-            j = tab[i];
-            uchar t0 = src[i], t1 = src[j];
-            dst[i] = t1; dst[j] = t0;
-        }
-    }
+#if CV_SIMD
+    if (esz == 2 * v_uint8x16::nlanes)
+    {
+        int end = (int)(size.width*esz);
+        int width = end/2;
+
+        for( ; size.height--; src += sstep, dst += dstep )
+        {
+            for( int i = 0, j = end - 2 * v_uint8x16::nlanes; i < width; i += 2 * v_uint8x16::nlanes, j -= 2 * v_uint8x16::nlanes )
+            {
+#if CV_SIMD256
+                v_uint8x32 t0, t1;
+
+                t0 = v256_load((uchar*)src + i);
+                t1 = v256_load((uchar*)src + j);
+                v_store(dst + j, t0);
+                v_store(dst + i, t1);
+#else
+                v_uint8x16 t0, t1, t2, t3;
+
+                t0 = v_load((uchar*)src + i);
+                t1 = v_load((uchar*)src + i + v_uint8x16::nlanes);
+                t2 = v_load((uchar*)src + j);
+                t3 = v_load((uchar*)src + j + v_uint8x16::nlanes);
+                v_store(dst + j, t0);
+                v_store(dst + j + v_uint8x16::nlanes, t1);
+                v_store(dst + i, t2);
+                v_store(dst + i + v_uint8x16::nlanes, t3);
+#endif
+            }
+        }
+    }
+    else if (esz == v_uint8x16::nlanes)
+    {
+        int end = (int)(size.width*esz);
+        int width = end/2;
+
+        for( ; size.height--; src += sstep, dst += dstep )
+        {
+            for( int i = 0, j = end - v_uint8x16::nlanes; i < width; i += v_uint8x16::nlanes, j -= v_uint8x16::nlanes )
+            {
+                v_uint8x16 t0, t1;
+
+                t0 = v_load((uchar*)src + i);
+                t1 = v_load((uchar*)src + j);
+                v_store(dst + j, t0);
+                v_store(dst + i, t1);
+            }
+        }
+    }
+    else if (esz == 8)
+    {
+        flipHoriz_single<v_uint64x2>(src, sstep, dst, dstep, size, esz);
+    }
+    else if (esz == 4)
+    {
+        flipHoriz_single<v_uint32x4>(src, sstep, dst, dstep, size, esz);
+    }
+    else if (esz == 2)
+    {
+        flipHoriz_single<v_uint16x8>(src, sstep, dst, dstep, size, esz);
+    }
+    else if (esz == 1)
+    {
+        flipHoriz_single<v_uint8x16>(src, sstep, dst, dstep, size, esz);
+    }
+    else if (esz == 24)
+    {
+        int end = (int)(size.width*esz);
+        int width = (end + 1)/2;
+
+        for( ; size.height--; src += sstep, dst += dstep )
+        {
+            for ( int i = 0, j = end; i < width; i += v_uint8x16::nlanes + sizeof(uint64_t), j -= v_uint8x16::nlanes + sizeof(uint64_t) )
+            {
+                v_uint8x16 t0, t1;
+                uint64_t t2, t3;
+
+                t0 = v_load((uchar*)src + i);
+                t2 = *((uint64_t*)((uchar*)src + i + v_uint8x16::nlanes));
+                t1 = v_load((uchar*)src + j - v_uint8x16::nlanes - sizeof(uint64_t));
+                t3 = *((uint64_t*)((uchar*)src + j - sizeof(uint64_t)));
+                v_store(dst + j - v_uint8x16::nlanes - sizeof(uint64_t), t0);
+                *((uint64_t*)(dst + j - sizeof(uint64_t))) = t2;
+                v_store(dst + i, t1);
+                *((uint64_t*)(dst + i + v_uint8x16::nlanes)) = t3;
+            }
+        }
+    }
+    else if (esz == 12)
+    {
+        flipHoriz_double<uint64_t,uint>(src, sstep, dst, dstep, size, esz);
+    }
+    else if (esz == 6)
+    {
+        flipHoriz_double<uint,ushort>(src, sstep, dst, dstep, size, esz);
+    }
+    else if (esz == 3)
+    {
+        flipHoriz_double<ushort,uchar>(src, sstep, dst, dstep, size, esz);
+    }
+    else
+#endif
+    {
+        int i, j, limit = (int)(((size.width + 1)/2)*esz);
+        AutoBuffer<int> _tab(size.width*esz);
+        int* tab = _tab.data();
+
+        for( i = 0; i < size.width; i++ )
+            for( size_t k = 0; k < esz; k++ )
+                tab[i*esz + k] = (int)((size.width - i - 1)*esz + k);
+
+        for( ; size.height--; src += sstep, dst += dstep )
+        {
+            for( i = 0; i < limit; i++ )
+            {
+                j = tab[i];
+                uchar t0 = src[i], t1 = src[j];
+                dst[i] = t1; dst[j] = t0;
+            }
+        }
+    }
}
@@ -597,6 +778,16 @@ flipVert( const uchar* src0, size_t sstep, uchar* dst0, size_t dstep, Size size,
          dst0 += dstep, dst1 -= dstep )
    {
        int i = 0;
+#if CV_SIMD
+        for( ; i <= size.width - (v_int32::nlanes * 4); i += v_int32::nlanes * 4 )
+        {
+            v_int32 t0 = vx_load((int*)(src0 + i));
+            v_int32 t1 = vx_load((int*)(src1 + i));
+            vx_store((int*)(dst0 + i), t1);
+            vx_store((int*)(dst1 + i), t0);
+        }
+#endif

        if( ((size_t)src0|(size_t)dst0|(size_t)src1|(size_t)dst1) % sizeof(int) == 0 )
        {
            for( ; i <= size.width - 16; i += 16 )
@@ -131,6 +131,10 @@ void* allocSingletonNewBuffer(size_t size) { return malloc(size); }
#if (_WIN32_WINNT >= 0x0602)
  #include <synchapi.h>
#endif
+#if ((_WIN32_WINNT >= 0x0600) && !defined(CV_DISABLE_FLS)) || defined(CV_FORCE_FLS)
+  #include <fibersapi.h>
+  #define CV_USE_FLS
+#endif
#undef small
#undef min
#undef max
@@ -142,7 +146,7 @@ void* allocSingletonNewBuffer(size_t size) { return malloc(size); }
#ifndef __cplusplus_winrt
#include <windows.storage.h>
#pragma comment(lib, "runtimeobject.lib")
-#endif
+#endif // WINRT

std::wstring GetTempPathWinRT()
{
@@ -1359,24 +1363,43 @@ void TlsAbstraction::SetData(void *pData)
    tlsData = pData;
}
#else //WINRT
+#ifdef CV_USE_FLS
+static void NTAPI opencv_fls_destructor(void* pData);
+#endif // CV_USE_FLS
TlsAbstraction::TlsAbstraction()
{
+#ifndef CV_USE_FLS
    tlsKey = TlsAlloc();
+#else // CV_USE_FLS
+    tlsKey = FlsAlloc(opencv_fls_destructor);
+#endif // CV_USE_FLS
    CV_Assert(tlsKey != TLS_OUT_OF_INDEXES);
}
TlsAbstraction::~TlsAbstraction()
{
+#ifndef CV_USE_FLS
    TlsFree(tlsKey);
+#else // CV_USE_FLS
+    FlsFree(tlsKey);
+#endif // CV_USE_FLS
}
void* TlsAbstraction::GetData() const
{
+#ifndef CV_USE_FLS
    return TlsGetValue(tlsKey);
+#else // CV_USE_FLS
+    return FlsGetValue(tlsKey);
+#endif // CV_USE_FLS
}
void TlsAbstraction::SetData(void *pData)
{
+#ifndef CV_USE_FLS
    CV_Assert(TlsSetValue(tlsKey, pData) == TRUE);
+#else // CV_USE_FLS
+    CV_Assert(FlsSetValue(tlsKey, pData) == TRUE);
+#endif // CV_USE_FLS
}
#endif
#endif // WINRT
#else // _WIN32
static void opencv_tls_destructor(void* pData);
TlsAbstraction::TlsAbstraction()
@@ -1611,7 +1634,14 @@ static void opencv_tls_destructor(void* pData)
{
    getTlsStorage().releaseThread(pData);
}
#endif
+#else // _WIN32
+#ifdef CV_USE_FLS
+static void WINAPI opencv_fls_destructor(void* pData)
+{
+    getTlsStorage().releaseThread(pData);
+}
+#endif // CV_USE_FLS
#endif // _WIN32

} // namespace details
using namespace details;
@@ -719,29 +719,45 @@ template <> int PyrUpVecV<float, float>(float** src, float** dst, int width)

#endif

+template<class CastOp>
+struct PyrDownInvoker : ParallelLoopBody
+{
+    PyrDownInvoker(const Mat& src, const Mat& dst, int borderType, int **tabR, int **tabM, int **tabL)
+    {
+        _src = &src;
+        _dst = &dst;
+        _borderType = borderType;
+        _tabR = tabR;
+        _tabM = tabM;
+        _tabL = tabL;
+    }
+
+    void operator()(const Range& range) const CV_OVERRIDE;
+
+    int **_tabR;
+    int **_tabM;
+    int **_tabL;
+    const Mat *_src;
+    const Mat *_dst;
+    int _borderType;
+};
+
template<class CastOp> void
pyrDown_( const Mat& _src, Mat& _dst, int borderType )
{
    const int PD_SZ = 5;
    typedef typename CastOp::type1 WT;
    typedef typename CastOp::rtype T;

+    CV_Assert( !_src.empty() );
    Size ssize = _src.size(), dsize = _dst.size();
    int cn = _src.channels();
-    int bufstep = (int)alignSize(dsize.width*cn, 16);
-    AutoBuffer<WT> _buf(bufstep*PD_SZ + 16);
-    WT* buf = alignPtr((WT*)_buf.data(), 16);

    int tabL[CV_CN_MAX*(PD_SZ+2)], tabR[CV_CN_MAX*(PD_SZ+2)];
    AutoBuffer<int> _tabM(dsize.width*cn);
    int* tabM = _tabM.data();
-    WT* rows[PD_SZ];
-    CastOp castOp;

    CV_Assert( ssize.width > 0 && ssize.height > 0 &&
               std::abs(dsize.width*2 - ssize.width) <= 2 &&
               std::abs(dsize.height*2 - ssize.height) <= 2 );
-    int sy0 = -PD_SZ/2, sy = sy0, width0 = std::min((ssize.width-PD_SZ/2-1)/2 + 1, dsize.width);
+    int width0 = std::min((ssize.width-PD_SZ/2-1)/2 + 1, dsize.width);

    for (int x = 0; x <= PD_SZ+1; x++)
    {
@@ -754,27 +770,51 @@ pyrDown_( const Mat& _src, Mat& _dst, int borderType )
        }
    }

+    for (int x = 0; x < dsize.width*cn; x++)
+        tabM[x] = (x/cn)*2*cn + x % cn;
+
+    int *tabLPtr = tabL;
+    int *tabRPtr = tabR;
+
+    cv::parallel_for_(Range(0,dsize.height), cv::PyrDownInvoker<CastOp>(_src, _dst, borderType, &tabRPtr, &tabM, &tabLPtr), cv::getNumThreads());
+}
+
+template<class CastOp>
+void PyrDownInvoker<CastOp>::operator()(const Range& range) const
+{
+    const int PD_SZ = 5;
+    typedef typename CastOp::type1 WT;
+    typedef typename CastOp::rtype T;
+    Size ssize = _src->size(), dsize = _dst->size();
+    int cn = _src->channels();
+    int bufstep = (int)alignSize(dsize.width*cn, 16);
+    AutoBuffer<WT> _buf(bufstep*PD_SZ + 16);
+    WT* buf = alignPtr((WT*)_buf.data(), 16);
+    WT* rows[PD_SZ];
+    CastOp castOp;
+
+    int sy0 = -PD_SZ/2, sy = range.start * 2 + sy0, width0 = std::min((ssize.width-PD_SZ/2-1)/2 + 1, dsize.width);

    ssize.width *= cn;
    dsize.width *= cn;
    width0 *= cn;

-    for (int x = 0; x < dsize.width; x++)
-        tabM[x] = (x/cn)*2*cn + x % cn;
-
-    for (int y = 0; y < dsize.height; y++)
+    for (int y = range.start; y < range.end; y++)
    {
-        T* dst = _dst.ptr<T>(y);
+        T* dst = (T*)_dst->ptr<T>(y);
        WT *row0, *row1, *row2, *row3, *row4;

        // fill the ring buffer (horizontal convolution and decimation)
-        for( ; sy <= y*2 + 2; sy++ )
+        int sy_limit = y*2 + 2;
+        for( ; sy <= sy_limit; sy++ )
        {
            WT* row = buf + ((sy - sy0) % PD_SZ)*bufstep;
-            int _sy = borderInterpolate(sy, ssize.height, borderType);
-            const T* src = _src.ptr<T>(_sy);
+            int _sy = borderInterpolate(sy, ssize.height, _borderType);
+            const T* src = _src->ptr<T>(_sy);

            do {
                int x = 0;
+                const int* tabL = *_tabL;
                for( ; x < cn; x++ )
                {
                    row[x] = src[tabL[x+cn*2]]*6 + (src[tabL[x+cn]] + src[tabL[x+cn*3]])*4 +
@@ -832,13 +872,14 @@ pyrDown_( const Mat& _src, Mat& _dst, int borderType )
        {
            for( ; x < width0; x++ )
            {
-                int sx = tabM[x];
+                int sx = (*_tabM)[x];
                row[x] = src[sx]*6 + (src[sx - cn] + src[sx + cn])*4 +
                         src[sx - cn*2] + src[sx + cn*2];
            }
        }

+        // tabR
+        const int* tabR = *_tabR;
+        for (int x_ = 0; x < dsize.width; x++, x_++)
        {
            row[x] = src[tabR[x_+cn*2]]*6 + (src[tabR[x_+cn]] + src[tabR[x_+cn*3]])*4 +
@@ -93,86 +93,6 @@ ignore_list = ['locate', #int&
               'meanShift' #Rect&
              ]

-# Classes and methods whitelist
-core = {'': ['absdiff', 'add', 'addWeighted', 'bitwise_and', 'bitwise_not', 'bitwise_or', 'bitwise_xor', 'cartToPolar',\
-             'compare', 'convertScaleAbs', 'copyMakeBorder', 'countNonZero', 'determinant', 'dft', 'divide', 'eigen', \
-             'exp', 'flip', 'getOptimalDFTSize','gemm', 'hconcat', 'inRange', 'invert', 'kmeans', 'log', 'magnitude', \
-             'max', 'mean', 'meanStdDev', 'merge', 'min', 'minMaxLoc', 'mixChannels', 'multiply', 'norm', 'normalize', \
-             'perspectiveTransform', 'polarToCart', 'pow', 'randn', 'randu', 'reduce', 'repeat', 'rotate', 'setIdentity', 'setRNGSeed', \
-             'solve', 'solvePoly', 'split', 'sqrt', 'subtract', 'trace', 'transform', 'transpose', 'vconcat'],
-        'Algorithm': []}
-
-imgproc = {'': ['Canny', 'GaussianBlur', 'Laplacian', 'HoughLines', 'HoughLinesP', 'HoughCircles', 'Scharr','Sobel', \
-                'adaptiveThreshold','approxPolyDP','arcLength','bilateralFilter','blur','boundingRect','boxFilter',\
-                'calcBackProject','calcHist','circle','compareHist','connectedComponents','connectedComponentsWithStats', \
-                'contourArea', 'convexHull', 'convexityDefects', 'cornerHarris','cornerMinEigenVal','createCLAHE', \
-                'createLineSegmentDetector','cvtColor','demosaicing','dilate', 'distanceTransform','distanceTransformWithLabels', \
-                'drawContours','ellipse','ellipse2Poly','equalizeHist','erode', 'filter2D', 'findContours','fitEllipse', \
-                'fitLine', 'floodFill','getAffineTransform', 'getPerspectiveTransform', 'getRotationMatrix2D', 'getStructuringElement', \
-                'goodFeaturesToTrack','grabCut','initUndistortRectifyMap', 'integral','integral2', 'isContourConvex', 'line', \
-                'matchShapes', 'matchTemplate','medianBlur', 'minAreaRect', 'minEnclosingCircle', 'moments', 'morphologyEx', \
-                'pointPolygonTest', 'putText','pyrDown','pyrUp','rectangle','remap', 'resize','sepFilter2D','threshold', \
-                'undistort','warpAffine','warpPerspective','warpPolar','watershed', \
-                'fillPoly', 'fillConvexPoly'],
-           'CLAHE': ['apply', 'collectGarbage', 'getClipLimit', 'getTilesGridSize', 'setClipLimit', 'setTilesGridSize']}
-
-objdetect = {'': ['groupRectangles'],
-             'HOGDescriptor': ['load', 'HOGDescriptor', 'getDefaultPeopleDetector', 'getDaimlerPeopleDetector', 'setSVMDetector', 'detectMultiScale'],
-             'CascadeClassifier': ['load', 'detectMultiScale2', 'CascadeClassifier', 'detectMultiScale3', 'empty', 'detectMultiScale']}
-
-video = {'': ['CamShift', 'calcOpticalFlowFarneback', 'calcOpticalFlowPyrLK', 'createBackgroundSubtractorMOG2', \
-              'findTransformECC', 'meanShift'],
-         'BackgroundSubtractorMOG2': ['BackgroundSubtractorMOG2', 'apply'],
-         'BackgroundSubtractor': ['apply', 'getBackgroundImage']}
-
-dnn = {'dnn_Net': ['setInput', 'forward'],
-       '': ['readNetFromCaffe', 'readNetFromTensorflow', 'readNetFromTorch', 'readNetFromDarknet',
-            'readNetFromONNX', 'readNet', 'blobFromImage']}
-
-features2d = {'Feature2D': ['detect', 'compute', 'detectAndCompute', 'descriptorSize', 'descriptorType', 'defaultNorm', 'empty', 'getDefaultName'],
-              'BRISK': ['create', 'getDefaultName'],
-              'ORB': ['create', 'setMaxFeatures', 'setScaleFactor', 'setNLevels', 'setEdgeThreshold', 'setFirstLevel', 'setWTA_K', 'setScoreType', 'setPatchSize', 'getFastThreshold', 'getDefaultName'],
-              'MSER': ['create', 'detectRegions', 'setDelta', 'getDelta', 'setMinArea', 'getMinArea', 'setMaxArea', 'getMaxArea', 'setPass2Only', 'getPass2Only', 'getDefaultName'],
-              'FastFeatureDetector': ['create', 'setThreshold', 'getThreshold', 'setNonmaxSuppression', 'getNonmaxSuppression', 'setType', 'getType', 'getDefaultName'],
-              'AgastFeatureDetector': ['create', 'setThreshold', 'getThreshold', 'setNonmaxSuppression', 'getNonmaxSuppression', 'setType', 'getType', 'getDefaultName'],
-              'GFTTDetector': ['create', 'setMaxFeatures', 'getMaxFeatures', 'setQualityLevel', 'getQualityLevel', 'setMinDistance', 'getMinDistance', 'setBlockSize', 'getBlockSize', 'setHarrisDetector', 'getHarrisDetector', 'setK', 'getK', 'getDefaultName'],
-              # 'SimpleBlobDetector': ['create'],
-              'KAZE': ['create', 'setExtended', 'getExtended', 'setUpright', 'getUpright', 'setThreshold', 'getThreshold', 'setNOctaves', 'getNOctaves', 'setNOctaveLayers', 'getNOctaveLayers', 'setDiffusivity', 'getDiffusivity', 'getDefaultName'],
-              'AKAZE': ['create', 'setDescriptorType', 'getDescriptorType', 'setDescriptorSize', 'getDescriptorSize', 'setDescriptorChannels', 'getDescriptorChannels', 'setThreshold', 'getThreshold', 'setNOctaves', 'getNOctaves', 'setNOctaveLayers', 'getNOctaveLayers', 'setDiffusivity', 'getDiffusivity', 'getDefaultName'],
-              'DescriptorMatcher': ['add', 'clear', 'empty', 'isMaskSupported', 'train', 'match', 'knnMatch', 'radiusMatch', 'clone', 'create'],
-              'BFMatcher': ['isMaskSupported', 'create'],
-              '': ['drawKeypoints', 'drawMatches', 'drawMatchesKnn']}
-
-photo = {'': ['createAlignMTB', 'createCalibrateDebevec', 'createCalibrateRobertson', \
-              'createMergeDebevec', 'createMergeMertens', 'createMergeRobertson', \
-              'createTonemapDrago', 'createTonemapMantiuk', 'createTonemapReinhard', 'inpaint'],
-         'CalibrateCRF': ['process'],
-         'AlignMTB' : ['calculateShift', 'shiftMat', 'computeBitmaps', 'getMaxBits', 'setMaxBits', \
-                       'getExcludeRange', 'setExcludeRange', 'getCut', 'setCut'],
-         'CalibrateDebevec' : ['getLambda', 'setLambda', 'getSamples', 'setSamples', 'getRandom', 'setRandom'],
-         'CalibrateRobertson' : ['getMaxIter', 'setMaxIter', 'getThreshold', 'setThreshold', 'getRadiance'],
-         'MergeExposures' : ['process'],
-         'MergeDebevec' : ['process'],
-         'MergeMertens' : ['process', 'getContrastWeight', 'setContrastWeight', 'getSaturationWeight', \
-                           'setSaturationWeight', 'getExposureWeight', 'setExposureWeight'],
-         'MergeRobertson' : ['process'],
-         'Tonemap' : ['process' , 'getGamma', 'setGamma'],
-         'TonemapDrago' : ['getSaturation', 'setSaturation', 'getBias', 'setBias', \
-                           'getSigmaColor', 'setSigmaColor', 'getSigmaSpace','setSigmaSpace'],
-         'TonemapMantiuk' : ['getScale', 'setScale', 'getSaturation', 'setSaturation'],
-         'TonemapReinhard' : ['getIntensity', 'setIntensity', 'getLightAdaptation', 'setLightAdaptation', \
-                              'getColorAdaptation', 'setColorAdaptation']
-         }
-
-aruco = {'': ['detectMarkers', 'drawDetectedMarkers', 'drawAxis', 'estimatePoseSingleMarkers', 'estimatePoseBoard', 'estimatePoseCharucoBoard', 'interpolateCornersCharuco', 'drawDetectedCornersCharuco'],
-         'aruco_Dictionary': ['get', 'drawMarker'],
-         'aruco_Board': ['create'],
-         'aruco_GridBoard': ['create', 'draw'],
-         'aruco_CharucoBoard': ['create', 'draw'],
-         }
-
-calib3d = {'': ['findHomography', 'calibrateCameraExtended', 'drawFrameAxes', 'estimateAffine2D', 'getDefaultNewCameraMatrix', 'initUndistortRectifyMap', 'Rodrigues']}
-
def makeWhiteList(module_list):
    wl = {}
    for m in module_list:
@@ -183,7 +103,9 @@ def makeWhiteList(module_list):
            wl[k] = m[k]
    return wl

-white_list = makeWhiteList([core, imgproc, objdetect, video, dnn, features2d, photo, aruco, calib3d])
+white_list = None
+exec(open(os.environ["OPENCV_JS_WHITELIST"]).read())
+assert(white_list)

# Features to be exported
export_enums = False
@@ -73,7 +73,7 @@ struct PyOpenCV_Converter< ${cname} >
        return true;
    }
    ${mappable_code}
-    failmsg("Expected ${cname} for argument '%%s'", name);
+    failmsg("Expected ${cname} for argument '%s'", name);
    return false;
}
};
@@ -30,7 +30,8 @@ class TestInfo(object):
        self.status = xmlnode.getAttribute("status")

        if self.name.startswith("DISABLED_"):
-            self.status = "disabled"
+            if self.status == 'notrun':
+                self.status = "disabled"
            self.fixture = self.fixture.replace("DISABLED_", "")
            self.name = self.name.replace("DISABLED_", "")
        self.properties = {
@@ -59,9 +59,18 @@ static void calcSharrDeriv(const cv::Mat& src, cv::Mat& dst)
{
    using namespace cv;
    using cv::detail::deriv_type;
-    int rows = src.rows, cols = src.cols, cn = src.channels(), colsn = cols*cn, depth = src.depth();
+    int rows = src.rows, cols = src.cols, cn = src.channels(), depth = src.depth();
    CV_Assert(depth == CV_8U);
    dst.create(rows, cols, CV_MAKETYPE(DataType<deriv_type>::depth, cn*2));
+    parallel_for_(Range(0, rows), cv::detail::SharrDerivInvoker(src, dst), cv::getNumThreads());
+}
+
+}//namespace
+
+void cv::detail::SharrDerivInvoker::operator()(const Range& range) const
+{
+    using cv::detail::deriv_type;
+    int rows = src.rows, cols = src.cols, cn = src.channels(), colsn = cols*cn;

    int x, y, delta = (int)alignSize((cols + 2)*cn, 16);
    AutoBuffer<deriv_type> _tempBuf(delta*2 + 64);
@@ -71,12 +80,12 @@ static void calcSharrDeriv(const cv::Mat& src, cv::Mat& dst)
    v_int16x8 c3 = v_setall_s16(3), c10 = v_setall_s16(10);
#endif

-    for( y = 0; y < rows; y++ )
+    for( y = range.start; y < range.end; y++ )
    {
        const uchar* srow0 = src.ptr<uchar>(y > 0 ? y-1 : rows > 1 ? 1 : 0);
        const uchar* srow1 = src.ptr<uchar>(y);
        const uchar* srow2 = src.ptr<uchar>(y < rows-1 ? y+1 : rows > 1 ? rows-2 : 0);
-        deriv_type* drow = dst.ptr<deriv_type>(y);
+        deriv_type* drow = (deriv_type *)dst.ptr<deriv_type>(y);

        // do vertical convolution
        x = 0;
@@ -141,8 +150,6 @@ static void calcSharrDeriv(const cv::Mat& src, cv::Mat& dst)
    }
}

-}//namespace
-
cv::detail::LKTrackerInvoker::LKTrackerInvoker(
    const Mat& _prevImg, const Mat& _prevDeriv, const Mat& _nextImg,
    const Point2f* _prevPts, Point2f* _nextPts,
@@ -7,6 +7,18 @@ namespace detail

typedef short deriv_type;

+struct SharrDerivInvoker : ParallelLoopBody
+{
+    SharrDerivInvoker(const Mat& _src, const Mat& _dst)
+        : src(_src), dst(_dst)
+    { }
+
+    void operator()(const Range& range) const CV_OVERRIDE;
+
+    const Mat& src;
+    const Mat& dst;
+};
+
struct LKTrackerInvoker : ParallelLoopBody
{
    LKTrackerInvoker( const Mat& _prevImg, const Mat& _prevDeriv, const Mat& _nextImg,
@@ -140,6 +140,8 @@ class Builder:
               "-DBUILD_PACKAGE=OFF",
               "-DBUILD_TESTS=OFF",
               "-DBUILD_PERF_TESTS=OFF"]
+        if self.options.cmake_option:
+            cmd += self.options.cmake_option
        if self.options.build_doc:
            cmd.append("-DBUILD_DOCS=ON")
        else:
@@ -180,6 +182,8 @@ class Builder:
            flags += "-s DISABLE_EXCEPTION_CATCHING=0 "
        if self.options.simd:
            flags += "-msimd128 "
+        if self.options.build_flags:
+            flags += self.options.build_flags
        return flags

    def config(self):
@@ -223,12 +227,22 @@ if __name__ == "__main__":
    parser.add_argument('--skip_config', action="store_true", help="Skip cmake config")
    parser.add_argument('--config_only', action="store_true", help="Only do cmake config")
    parser.add_argument('--enable_exception', action="store_true", help="Enable exception handling")
+    # Use flag --cmake_option="-D...=ON" only for one argument; if you would add more changes write new cmake_option flags
+    parser.add_argument('--cmake_option', action='append', help="Append CMake options")
+    # Use flag --build_flags="-s USE_PTHREADS=0 -Os" for one and more arguments as in the example
+    parser.add_argument('--build_flags', help="Append Emscripten build options")
+    parser.add_argument('--build_wasm_intrin_test', default=False, action="store_true", help="Build WASM intrin tests")
+    # Write a path to the config file as the argument of this flag
+    parser.add_argument('--config', default=os.path.join(os.path.dirname(os.path.abspath(__file__)), 'opencv_js.config.py'),
+                        help="Specify configuration file with own list of exported into JS functions")

    args = parser.parse_args()

    log.basicConfig(format='%(message)s', level=log.DEBUG)
    log.debug("Args: %s", args)

+    os.environ["OPENCV_JS_WHITELIST"] = args.config
+
    if args.emscripten_dir is None:
        log.info("Cannot get Emscripten path, please specify it either by EMSCRIPTEN environment variable or --emscripten_dir option.")
        sys.exit(-1)
platforms/js/opencv_js.config.py (new file, 82 lines)
@@ -0,0 +1,82 @@
# Classes and methods whitelist
core = {'': ['absdiff', 'add', 'addWeighted', 'bitwise_and', 'bitwise_not', 'bitwise_or', 'bitwise_xor', 'cartToPolar',\
             'compare', 'convertScaleAbs', 'copyMakeBorder', 'countNonZero', 'determinant', 'dft', 'divide', 'eigen', \
             'exp', 'flip', 'getOptimalDFTSize','gemm', 'hconcat', 'inRange', 'invert', 'kmeans', 'log', 'magnitude', \
             'max', 'mean', 'meanStdDev', 'merge', 'min', 'minMaxLoc', 'mixChannels', 'multiply', 'norm', 'normalize', \
             'perspectiveTransform', 'polarToCart', 'pow', 'randn', 'randu', 'reduce', 'repeat', 'rotate', 'setIdentity', 'setRNGSeed', \
             'solve', 'solvePoly', 'split', 'sqrt', 'subtract', 'trace', 'transform', 'transpose', 'vconcat'],
        'Algorithm': []}

imgproc = {'': ['Canny', 'GaussianBlur', 'Laplacian', 'HoughLines', 'HoughLinesP', 'HoughCircles', 'Scharr','Sobel', \
                'adaptiveThreshold','approxPolyDP','arcLength','bilateralFilter','blur','boundingRect','boxFilter',\
                'calcBackProject','calcHist','circle','compareHist','connectedComponents','connectedComponentsWithStats', \
                'contourArea', 'convexHull', 'convexityDefects', 'cornerHarris','cornerMinEigenVal','createCLAHE', \
                'createLineSegmentDetector','cvtColor','demosaicing','dilate', 'distanceTransform','distanceTransformWithLabels', \
                'drawContours','ellipse','ellipse2Poly','equalizeHist','erode', 'filter2D', 'findContours','fitEllipse', \
                'fitLine', 'floodFill','getAffineTransform', 'getPerspectiveTransform', 'getRotationMatrix2D', 'getStructuringElement', \
                'goodFeaturesToTrack','grabCut','initUndistortRectifyMap', 'integral','integral2', 'isContourConvex', 'line', \
                'matchShapes', 'matchTemplate','medianBlur', 'minAreaRect', 'minEnclosingCircle', 'moments', 'morphologyEx', \
                'pointPolygonTest', 'putText','pyrDown','pyrUp','rectangle','remap', 'resize','sepFilter2D','threshold', \
                'undistort','warpAffine','warpPerspective','warpPolar','watershed', \
                'fillPoly', 'fillConvexPoly'],
           'CLAHE': ['apply', 'collectGarbage', 'getClipLimit', 'getTilesGridSize', 'setClipLimit', 'setTilesGridSize']}

objdetect = {'': ['groupRectangles'],
             'HOGDescriptor': ['load', 'HOGDescriptor', 'getDefaultPeopleDetector', 'getDaimlerPeopleDetector', 'setSVMDetector', 'detectMultiScale'],
             'CascadeClassifier': ['load', 'detectMultiScale2', 'CascadeClassifier', 'detectMultiScale3', 'empty', 'detectMultiScale']}

video = {'': ['CamShift', 'calcOpticalFlowFarneback', 'calcOpticalFlowPyrLK', 'createBackgroundSubtractorMOG2', \
              'findTransformECC', 'meanShift'],
         'BackgroundSubtractorMOG2': ['BackgroundSubtractorMOG2', 'apply'],
         'BackgroundSubtractor': ['apply', 'getBackgroundImage']}

dnn = {'dnn_Net': ['setInput', 'forward'],
       '': ['readNetFromCaffe', 'readNetFromTensorflow', 'readNetFromTorch', 'readNetFromDarknet',
            'readNetFromONNX', 'readNet', 'blobFromImage']}

features2d = {'Feature2D': ['detect', 'compute', 'detectAndCompute', 'descriptorSize', 'descriptorType', 'defaultNorm', 'empty', 'getDefaultName'],
              'BRISK': ['create', 'getDefaultName'],
              'ORB': ['create', 'setMaxFeatures', 'setScaleFactor', 'setNLevels', 'setEdgeThreshold', 'setFirstLevel', 'setWTA_K', 'setScoreType', 'setPatchSize', 'getFastThreshold', 'getDefaultName'],
              'MSER': ['create', 'detectRegions', 'setDelta', 'getDelta', 'setMinArea', 'getMinArea', 'setMaxArea', 'getMaxArea', 'setPass2Only', 'getPass2Only', 'getDefaultName'],
              'FastFeatureDetector': ['create', 'setThreshold', 'getThreshold', 'setNonmaxSuppression', 'getNonmaxSuppression', 'setType', 'getType', 'getDefaultName'],
              'AgastFeatureDetector': ['create', 'setThreshold', 'getThreshold', 'setNonmaxSuppression', 'getNonmaxSuppression', 'setType', 'getType', 'getDefaultName'],
              'GFTTDetector': ['create', 'setMaxFeatures', 'getMaxFeatures', 'setQualityLevel', 'getQualityLevel', 'setMinDistance', 'getMinDistance', 'setBlockSize', 'getBlockSize', 'setHarrisDetector', 'getHarrisDetector', 'setK', 'getK', 'getDefaultName'],
              # 'SimpleBlobDetector': ['create'],
              'KAZE': ['create', 'setExtended', 'getExtended', 'setUpright', 'getUpright', 'setThreshold', 'getThreshold', 'setNOctaves', 'getNOctaves', 'setNOctaveLayers', 'getNOctaveLayers', 'setDiffusivity', 'getDiffusivity', 'getDefaultName'],
              'AKAZE': ['create', 'setDescriptorType', 'getDescriptorType', 'setDescriptorSize', 'getDescriptorSize', 'setDescriptorChannels', 'getDescriptorChannels', 'setThreshold', 'getThreshold', 'setNOctaves', 'getNOctaves', 'setNOctaveLayers', 'getNOctaveLayers', 'setDiffusivity', 'getDiffusivity', 'getDefaultName'],
              'DescriptorMatcher': ['add', 'clear', 'empty', 'isMaskSupported', 'train', 'match', 'knnMatch', 'radiusMatch', 'clone', 'create'],
              'BFMatcher': ['isMaskSupported', 'create'],
              '': ['drawKeypoints', 'drawMatches', 'drawMatchesKnn']}

photo = {'': ['createAlignMTB', 'createCalibrateDebevec', 'createCalibrateRobertson', \
              'createMergeDebevec', 'createMergeMertens', 'createMergeRobertson', \
              'createTonemapDrago', 'createTonemapMantiuk', 'createTonemapReinhard', 'inpaint'],
         'CalibrateCRF': ['process'],
         'AlignMTB' : ['calculateShift', 'shiftMat', 'computeBitmaps', 'getMaxBits', 'setMaxBits', \
                       'getExcludeRange', 'setExcludeRange', 'getCut', 'setCut'],
         'CalibrateDebevec' : ['getLambda', 'setLambda', 'getSamples', 'setSamples', 'getRandom', 'setRandom'],
         'CalibrateRobertson' : ['getMaxIter', 'setMaxIter', 'getThreshold', 'setThreshold', 'getRadiance'],
         'MergeExposures' : ['process'],
         'MergeDebevec' : ['process'],
         'MergeMertens' : ['process', 'getContrastWeight', 'setContrastWeight', 'getSaturationWeight', \
                           'setSaturationWeight', 'getExposureWeight', 'setExposureWeight'],
         'MergeRobertson' : ['process'],
         'Tonemap' : ['process' , 'getGamma', 'setGamma'],
         'TonemapDrago' : ['getSaturation', 'setSaturation', 'getBias', 'setBias', \
                           'getSigmaColor', 'setSigmaColor', 'getSigmaSpace','setSigmaSpace'],
         'TonemapMantiuk' : ['getScale', 'setScale', 'getSaturation', 'setSaturation'],
         'TonemapReinhard' : ['getIntensity', 'setIntensity', 'getLightAdaptation', 'setLightAdaptation', \
                              'getColorAdaptation', 'setColorAdaptation']
         }

aruco = {'': ['detectMarkers', 'drawDetectedMarkers', 'drawAxis', 'estimatePoseSingleMarkers', 'estimatePoseBoard', 'estimatePoseCharucoBoard', 'interpolateCornersCharuco', 'drawDetectedCornersCharuco'],
         'aruco_Dictionary': ['get', 'drawMarker'],
         'aruco_Board': ['create'],
         'aruco_GridBoard': ['create', 'draw'],
         'aruco_CharucoBoard': ['create', 'draw'],
         }

calib3d = {'': ['findHomography', 'calibrateCameraExtended', 'drawFrameAxes', 'estimateAffine2D', 'getDefaultNewCameraMatrix', 'initUndistortRectifyMap', 'Rodrigues']}


white_list = makeWhiteList([core, imgproc, objdetect, video, dnn, features2d, photo, aruco, calib3d])
@@ -247,10 +247,19 @@ int main(int argc, char** argv)
    //copyMakeBorder(img2, img2p, 0, 0, numberOfDisparities, 0, IPL_BORDER_REPLICATE);

    int64 t = getTickCount();
+    float disparity_multiplier = 1.0f;
    if( alg == STEREO_BM )
+    {
        bm->compute(img1, img2, disp);
+        if (disp.type() == CV_16S)
+            disparity_multiplier = 16.0f;
+    }
    else if( alg == STEREO_SGBM || alg == STEREO_HH || alg == STEREO_3WAY )
+    {
        sgbm->compute(img1, img2, disp);
+        if (disp.type() == CV_16S)
+            disparity_multiplier = 16.0f;
+    }
    t = getTickCount() - t;
    printf("Time elapsed: %fms\n", t*1000/getTickFrequency());
@@ -281,7 +290,9 @@ int main(int argc, char** argv)
        printf("storing the point cloud...");
        fflush(stdout);
        Mat xyz;
-        reprojectImageTo3D(disp, xyz, Q, true);
+        Mat floatDisp;
+        disp.convertTo(floatDisp, CV_32F, 1.0f / disparity_multiplier);
+        reprojectImageTo3D(floatDisp, xyz, Q, true);
        saveXYZ(point_cloud_filename.c_str(), xyz);
        printf("\n");
    }