Merge pull request #25292 from kaingwade:features2d_parts_to_contrib

Features2d cleanup: Move several feature detectors and descriptors to opencv_contrib #25292

features2d cleanup: #24999

The PR moves KAZE, AKAZE, AgastFeatureDetector, BRISK and the Bag of Visual Words (BOW) framework to opencv_contrib/xfeatures2d.

Related PR: opencv/opencv_contrib#3709
Authored by WU Jia on 2024-10-10 22:10:22 +08:00; committed by GitHub.
parent 9dbfba0fd8
commit ef98c25d60
75 changed files with 110 additions and 45699 deletions
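In practice the move changes where user code finds these classes. A minimal before/after sketch in Python; the post-move names assume the usual opencv_contrib xfeatures2d binding convention and depend on opencv/opencv_contrib#3709:

@code{.py}
import cv2 as cv

# Before this PR: classes live in the main features2d module.
brisk = cv.BRISK_create()
akaze = cv.AKAZE_create()

# After this PR (assumed): build with opencv_contrib and use xfeatures2d.
# brisk = cv.xfeatures2d.BRISK_create()
# akaze = cv.xfeatures2d.AKAZE_create()
@endcode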

View File

@ -1,12 +1,3 @@
@incollection{ABD12,
author = {Alcantarilla, Pablo Fern{\'a}ndez and Bartoli, Adrien and Davison, Andrew J},
title = {KAZE features},
booktitle = {Computer Vision--ECCV 2012},
year = {2012},
pages = {214--227},
publisher = {Springer},
url = {https://www.doc.ic.ac.uk/~ajd/Publications/alcantarilla_etal_eccv2012.pdf}
}
@article{ANB13,
author = {Pablo Fern{\'{a}}ndez Alcantarilla and Jes{\'{u}}s Nuevo and Adrien Bartoli},
editor = {Tilo Burghardt and Dima Damen and Walterio W. Mayol{-}Cuevas and Majid Mirmehdi},
@ -597,14 +588,6 @@
doi = {10.5201/ipol.2011.my-asift},
url = {http://www.ipol.im/pub/algo/my_affine_sift/}
}
@inproceedings{LCS11,
author = {Leutenegger, Stefan and Chli, Margarita and Siegwart, Roland Yves},
title = {BRISK: Binary robust invariant scalable keypoints},
booktitle = {Computer Vision (ICCV), 2011 IEEE International Conference on},
year = {2011},
pages = {2548--2555},
publisher = {IEEE}
}
@article{Louhichi07,
author = {Louhichi, H. and Fournel, T. and Lavest, J. M. and Ben Aissia, H.},
title = {Self-calibration of Scheimpflug cameras: an easy protocol},

View File

@ -1,92 +0,0 @@
BRIEF (Binary Robust Independent Elementary Features) {#tutorial_py_brief}
=====================================================
Goal
----
In this chapter
- We will see the basics of the BRIEF algorithm
Theory
------
We know SIFT uses a 128-dimensional vector for its descriptors. Since it uses floating point
numbers, each descriptor takes roughly 512 bytes. Similarly, SURF takes a minimum of 256 bytes (for
the 64-dim version). Creating such vectors for thousands of features takes a lot of memory, which is
not feasible for resource-constrained applications, especially embedded systems. And the larger the
memory footprint, the longer matching takes.
But all these dimensions may not be needed for actual matching. We can compress them using several
methods like PCA, LDA, etc. Other methods, like hashing with LSH (Locality Sensitive Hashing), are
used to convert these floating-point SIFT descriptors into binary strings. These binary strings are
used to match features using the Hamming distance. This provides a good speed-up, because finding
the Hamming distance is just an XOR and a bit count, which are very fast on modern CPUs with SSE
instructions. But here we need to find the descriptors first; only then can we apply hashing, which
doesn't solve our initial memory problem.
This is where BRIEF comes into the picture. It provides a shortcut to obtain binary strings directly,
without computing a full floating-point descriptor first. It takes a smoothed image patch and selects
a set of \f$n_d\f$ (x,y) location pairs in a unique way (explained in the paper). Then pixel intensity
comparisons are done on these location pairs. For example, let the first location pair be \f$p\f$ and
\f$q\f$. If \f$I(p) < I(q)\f$, the result is 1, otherwise it is 0. This is applied to all the
\f$n_d\f$ location pairs to get an \f$n_d\f$-dimensional bitstring.
This \f$n_d\f$ can be 128, 256 or 512. OpenCV supports all of these, and the default is 256
(OpenCV represents the size in bytes, so the corresponding values are 16, 32 and 64). Once you have
these bitstrings, you can use the Hamming distance to match the descriptors.
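As a toy illustration of why binary strings are cheap to match, here is the XOR-plus-popcount Hamming
distance on two random 256-bit descriptors stored the way OpenCV stores them (32 uint8 bytes); the
data is made up for the example:

@code{.py}
import numpy as np

rng = np.random.default_rng(0)
d1 = rng.integers(0, 256, 32, dtype=np.uint8)  # 256-bit descriptor as 32 bytes
d2 = rng.integers(0, 256, 32, dtype=np.uint8)

# Hamming distance = number of differing bits: XOR, then count the set bits.
print(np.unpackbits(d1 ^ d2).sum())
@endcode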
One important point is that BRIEF is only a feature descriptor; it doesn't provide any method to find
features. So you will have to use another feature detector, such as SIFT, SURF, etc. The paper
recommends using CenSurE, which is a fast detector, and BRIEF even works slightly better for CenSurE
points than for SURF points.
In short, BRIEF is a faster method for feature descriptor calculation and matching. It also provides
a high recognition rate, unless there is large in-plane rotation.
STAR(CenSurE) in OpenCV
------
STAR is a feature detector derived from CenSurE.
Unlike CenSurE, which uses polygons such as squares, hexagons and octagons to approximate a circle,
STAR emulates a circle with 2 overlapping squares: 1 upright and 1 rotated by 45 degrees. These polygons are bi-level:
they can be seen as polygons with thick borders, where the borders and the enclosed area have weights of opposing signs.
This gives better computational characteristics than other scale-space detectors, and it is capable of real-time implementation.
In contrast to SIFT and SURF, which find extrema at sub-sampled pixels, compromising accuracy at larger scales,
CenSurE creates a feature vector using full spatial resolution at all scales in the pyramid.
BRIEF in OpenCV
---------------
The code below shows the computation of BRIEF descriptors with the help of the CenSurE (STAR) detector.
Note that you need [opencv contrib](https://github.com/opencv/opencv_contrib) to use this.
@code{.py}
import numpy as np
import cv2 as cv
img = cv.imread('simple.jpg', cv.IMREAD_GRAYSCALE)
# Initiate STAR (CenSurE) detector
star = cv.xfeatures2d.StarDetector_create()
# Initiate BRIEF extractor
brief = cv.xfeatures2d.BriefDescriptorExtractor_create()
# find the keypoints with STAR
kp = star.detect(img,None)
# compute the descriptors with BRIEF
kp, des = brief.compute(img, kp)
print( brief.descriptorSize() )
print( des.shape )
@endcode
The function brief.descriptorSize() gives the \f$n_d\f$ size used in bytes. By default it is 32. Next
comes matching; a quick preview is sketched below, and it will be done properly in another chapter.
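As that preview, here is a minimal sketch of Hamming-distance matching for these binary descriptors,
assuming a hypothetical second image 'simple2.jpg' processed exactly like above:

@code{.py}
img2 = cv.imread('simple2.jpg', cv.IMREAD_GRAYSCALE)  # hypothetical second view
kp2 = star.detect(img2, None)
kp2, des2 = brief.compute(img2, kp2)

# Binary descriptors are compared with the Hamming norm
bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des, des2), key=lambda m: m.distance)
print(len(matches))
@endcode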
Additional Resources
--------------------
-# Michael Calonder, Vincent Lepetit, Christoph Strecha, and Pascal Fua, "BRIEF: Binary Robust
Independent Elementary Features", 11th European Conference on Computer Vision (ECCV), Heraklion,
Crete. LNCS Springer, September 2010.
-#  [LSH (Locality Sensitive Hashing)](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) at Wikipedia.

(Five binary tutorial images were deleted in this range: 13 KiB, 26 KiB, 28 KiB, 12 KiB and 7.7 KiB.)

View File

@ -1,157 +0,0 @@
Introduction to SURF (Speeded-Up Robust Features) {#tutorial_py_surf_intro}
=================================================
Goal
----
In this chapter,
- We will see the basics of SURF
- We will see SURF functionalities in OpenCV
Theory
------
In the last chapter, we saw SIFT for keypoint detection and description. But it was comparatively
slow, and people needed a more speeded-up version. In 2006, Bay, H., Tuytelaars, T. and Van Gool, L.
published another paper, "SURF: Speeded Up Robust Features", which introduced a new algorithm called
SURF. As the name suggests, it is a speeded-up version of SIFT.
In SIFT, Lowe approximated the Laplacian of Gaussian with a Difference of Gaussians for building the
scale space. SURF goes a little further and approximates the LoG with a box filter. The image below
shows a demonstration of such an approximation. One big advantage of this approximation is that
convolution with a box filter can easily be calculated with the help of integral images, and it can
be done in parallel for different scales. SURF also relies on the determinant of the Hessian matrix
for both scale and location.
![image](images/surf_boxfilter.jpg)
For orientation assignment, SURF uses wavelet responses in the horizontal and vertical directions for
a neighbourhood of size 6s, with appropriate Gaussian weights applied. Then they are plotted in a
space as shown in the image below. The dominant orientation is estimated by calculating the sum of
all responses within a sliding orientation window of 60 degrees. The interesting thing is that the
wavelet responses can be found very easily at any scale using integral images. For many applications
rotation invariance is not required, so there is no need to find this orientation, which speeds up
the process. SURF provides such a functionality, called Upright-SURF or U-SURF. It improves speed and
is robust up to \f$\pm 15^{\circ}\f$. OpenCV supports both, depending on the flag **upright**. If it
is 0, the orientation is calculated. If it is 1, the orientation is not calculated, which is faster.
![image](images/surf_orientation.jpg)
For feature description, SURF again uses wavelet responses in the horizontal and vertical directions
(again, integral images make things easier). A neighbourhood of size 20s x 20s is taken around the
keypoint, where s is the scale. It is divided into 4x4 subregions. For each subregion, horizontal and
vertical wavelet responses are taken and a vector is formed:
\f$v=( \sum{d_x}, \sum{d_y}, \sum{|d_x|}, \sum{|d_y|})\f$. Concatenated over all subregions, this
gives the SURF feature descriptor with a total of 64 dimensions. The lower the dimension, the faster
the computation and matching, though at some cost in distinctiveness.
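The 64 dimensions follow directly from this layout: 4x4 subregions times 4 sums each. A toy sketch
with made-up wavelet responses, just to show the bookkeeping:

@code{.py}
import numpy as np

rng = np.random.default_rng(0)
dx = rng.standard_normal((20, 20))  # stand-in horizontal Haar responses
dy = rng.standard_normal((20, 20))  # stand-in vertical Haar responses

v = []
for i in range(0, 20, 5):           # 4x4 grid of 5x5 subregions
    for j in range(0, 20, 5):
        sx, sy = dx[i:i+5, j:j+5], dy[i:i+5, j:j+5]
        v += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
print(len(v))                        # 16 subregions * 4 sums = 64
@endcode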
For more distinctiveness, the SURF feature descriptor has an extended 128-dimensional version. The
sums of \f$d_x\f$ and \f$|d_x|\f$ are computed separately for \f$d_y < 0\f$ and \f$d_y \geq 0\f$.
Similarly, the sums of \f$d_y\f$ and \f$|d_y|\f$ are split up according to the sign of \f$d_x\f$,
thereby doubling the number of features. This doesn't add much computational complexity. OpenCV
supports both by setting the flag **extended** to 0 or 1 for 64-dim and 128-dim respectively (the
default is 64-dim, as the demo below shows).
Another important improvement is the use of the sign of the Laplacian (the trace of the Hessian
matrix) for the underlying interest point. It adds no computation cost, since it is already computed
during detection. The sign of the Laplacian distinguishes bright blobs on dark backgrounds from the
reverse situation. In the matching stage, we only compare features if they have the same type of
contrast (as shown in the image below). This minimal information allows for faster matching, without
reducing the descriptor's performance.
![image](images/surf_matching.jpg)
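As a toy illustration of this prefilter (with made-up Laplacian signs), pairs with opposite contrast
are discarded before any descriptor distance is computed:

@code{.py}
import numpy as np

rng = np.random.default_rng(0)
signs_a = rng.choice([-1, 1], 100)   # hypothetical Laplacian signs, image A
signs_b = rng.choice([-1, 1], 120)   # hypothetical Laplacian signs, image B

# Only same-sign pairs remain candidates for descriptor comparison.
candidates = signs_a[:, None] == signs_b[None, :]
print(candidates.mean())             # ~0.5: about half the pairs pruned for free
@endcode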
In short, SURF adds a lot of features to improve speed at every step. Analysis shows it is 3 times
faster than SIFT, with comparable performance. SURF is good at handling images with blurring and
rotation, but not good at handling viewpoint and illumination changes.
SURF in OpenCV
--------------
OpenCV provides SURF functionality just like SIFT. You initiate a SURF object with optional
parameters like 64/128-dim descriptors, Upright/Normal SURF, etc. All the details are well explained
in the docs. Then, as we did with SIFT, we can use SURF.detect(), SURF.compute(), etc. to find
keypoints and descriptors.
First we will see a simple demo of how to find SURF keypoints and descriptors and draw them. All
examples are shown in a Python terminal, since it is just the same as for SIFT.
@code{.py}
>>> import cv2 as cv
>>> from matplotlib import pyplot as plt
>>> img = cv.imread('fly.png', cv.IMREAD_GRAYSCALE)
# Create SURF object. You can specify params here or later.
# Here I set Hessian Threshold to 400
>>> surf = cv.xfeatures2d.SURF_create(400)
# Find keypoints and descriptors directly
>>> kp, des = surf.detectAndCompute(img,None)
>>> len(kp)
699
@endcode
699 keypoints are too many to show in a picture. We reduce the number to around 50 to draw them on
the image. While matching we may need all those features, but not now. So we increase the Hessian
threshold.
@code{.py}
# Check present Hessian threshold
>>> print( surf.getHessianThreshold() )
400.0
# We set it to 50000. Remember, this is just for displaying the keypoints in a picture.
# In actual use, a value of 300-500 works better
>>> surf.setHessianThreshold(50000)
# Again compute keypoints and check its number.
>>> kp, des = surf.detectAndCompute(img,None)
>>> print( len(kp) )
47
@endcode
That is fewer than 50. Let's draw the keypoints on the image.
@code{.py}
>>> img2 = cv.drawKeypoints(img,kp,None,(255,0,0),4)
>>> plt.imshow(img2),plt.show()
@endcode
See the result below. You can see that SURF acts more like a blob detector: it detects the white
blobs on the wings of the butterfly. You can test it with other images.
![image](images/surf_kp1.jpg)
Now I want to apply U-SURF, so that it won't compute the orientation.
@code{.py}
# Check the upright flag; if it is False, set it to True
>>> print( surf.getUpright() )
False
>>> surf.setUpright(True)
# Recompute the feature points and draw it
>>> kp = surf.detect(img,None)
>>> img2 = cv.drawKeypoints(img,kp,None,(255,0,0),4)
>>> plt.imshow(img2),plt.show()
@endcode
See the results below. All the orientations are shown in the same direction. It is faster than
before. If you are working on cases where orientation is not a problem (like panorama stitching),
this is better.
![image](images/surf_kp2.jpg)
Finally we check the descriptor size and change it to 128 if it is only 64-dim.
@code{.py}
# Find size of descriptor
>>> print( surf.descriptorSize() )
64
# That means the flag "extended" is False.
>>> surf.getExtended()
False
# So we set it to True to get 128-dim descriptors.
>>> surf.setExtended(True)
>>> kp, des = surf.detectAndCompute(img,None)
>>> print( surf.descriptorSize() )
128
>>> print( des.shape )
(47, 128)
@endcode
The remaining part is matching, which we will cover in another chapter.

View File

@ -22,28 +22,15 @@ Feature Detection and Description {#tutorial_py_table_of_contents_feature2d}
is not good enough when scale of image changes. Lowe developed a breakthrough method to find
scale-invariant features and it is called SIFT
- @subpage tutorial_py_surf_intro
SIFT is really good,
but not fast enough, so people came up with a speeded-up version called SURF.
- @subpage tutorial_py_fast
All the above feature
detection methods are good in some way, but they are not fast enough for real-time
applications like SLAM. Enter the FAST algorithm, which is really "FAST".
- @subpage tutorial_py_brief
SIFT uses a feature
descriptor with 128 floating point numbers. Consider thousands of such features: they take a lot of
memory and more time for matching. We can compress the descriptors to make matching faster, but we
still have to calculate them first. Enter BRIEF, which gives a shortcut to find binary descriptors
with less memory, faster matching, and still a high recognition rate.
- @subpage tutorial_py_orb
SIFT and SURF are good in what they do, but what if you have to pay a few dollars every year to use them in your applications? Yeah, they are patented!!! To solve that problem, OpenCV devs came up with a new "FREE" alternative to SIFT & SURF, and that is ORB.
SURF is good in what it does, but what if you have to pay a few dollars every year to use it in your applications? Yeah, it is patented!!! To solve that problem, OpenCV devs came up with a new "FREE" alternative to SIFT & SURF, and that is ORB.
- @subpage tutorial_py_matcher

View File

@ -22,6 +22,8 @@ number of inliers (i.e. matches that fit in the given homography).
You can find expanded version of this example here:
<https://github.com/pablofdezalc/test_kaze_akaze_opencv>
\warning You need the [OpenCV contrib module *xfeatures2d*](https://github.com/opencv/opencv_contrib/tree/5.x/modules/xfeatures2d) to be able to use the AKAZE features.
Data
----
@ -42,7 +44,7 @@ You can find the images (*graf1.png*, *graf3.png*) and homography (*H1to3p.xml*)
@add_toggle_cpp
- **Downloadable code**: Click
[here](https://raw.githubusercontent.com/opencv/opencv/5.x/samples/cpp/tutorial_code/features2D/AKAZE_match.cpp)
[here](https://github.com/opencv/opencv/blob/5.x/samples/cpp/tutorial_code/features2D/AKAZE_match.cpp)
- **Code at glance:**
@include samples/cpp/tutorial_code/features2D/AKAZE_match.cpp
@ -50,7 +52,7 @@ You can find the images (*graf1.png*, *graf3.png*) and homography (*H1to3p.xml*)
@add_toggle_java
- **Downloadable code**: Click
[here](https://raw.githubusercontent.com/opencv/opencv/5.x/samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java)
[here](https://github.com/opencv/opencv/blob/5.x/samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java)
- **Code at glance:**
@include samples/java/tutorial_code/features2D/akaze_matching/AKAZEMatchDemo.java
@ -58,7 +60,7 @@ You can find the images (*graf1.png*, *graf3.png*) and homography (*H1to3p.xml*)
@add_toggle_python
- **Downloadable code**: Click
[here](https://raw.githubusercontent.com/opencv/opencv/5.x/samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py)
[here](https://github.com/opencv/opencv/blob/5.x/samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py)
- **Code at glance:**
@include samples/python/tutorial_code/features2D/akaze_matching/AKAZE_match.py

View File

@ -17,6 +17,8 @@ Introduction
In this tutorial we will compare *AKAZE* and *ORB* local features using them to find matches between
video frames and track object movements.
\warning You need the [OpenCV contrib module *xfeatures2d*](https://github.com/opencv/opencv_contrib/tree/5.x/modules/xfeatures2d) to be able to use the AKAZE features.
The algorithm is as follows:
- Detect and describe keypoints on the first frame, manually set object boundaries

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,284 +0,0 @@
#!/usr/bin/perl
use strict;
use warnings;
use autodie; # die if problem reading or writing a file

# Convert the nested if/else AGAST decision-tree code in agast.txt
# (lines $ARGV[0]..$ARGV[1]) into a 32-bit lookup table named $ARGV[2],
# written to agast_new.txt.
my $filein = "./agast.txt";
my $fileout = "./agast_new.txt";
my $i1=1;
my $i2=1;
my $i3=1;
my $tmp;
my $ifcount0=0;
my $ifcount1=0;
my $ifcount2=0;
my $ifcount3=0;
my $ifcount4=0;
my $elsecount;
my $myfirstline = $ARGV[0];
my $mylastline = $ARGV[1];
my $tablename = $ARGV[2];
my @array0 = ();
my @array1 = ();
my @array2 = ();
my @array3 = ();
my $homogeneous;
my $success_homogeneous;
my $structured;
my $success_structured;
open(my $in1, "<", $filein) or die "Can't open $filein: $!";
open(my $out, ">", $fileout) or die "Can't open $fileout: $!";
$array0[0] = 0;
$i1=1;
while (my $line1 = <$in1>)
{
chomp $line1;
$array0[$i1] = 0;
if (($i1>=$myfirstline)&&($i1<=$mylastline))
{
if($line1=~/if\(ptr\[offset(\d+)/)
{
if($line1=~/if\(ptr\[offset(\d+).*\>.*cb/)
{
$tmp=$1;
}
else
{
if($line1=~/if\(ptr\[offset(\d+).*\<.*c\_b/)
{
$tmp=$1+128;
}
else
{
die "invalid array index!"
}
}
$array1[$ifcount1] = $tmp;
$array0[$ifcount1] = $i1;
$ifcount1++;
}
else
{
}
}
$i1++;
}
$homogeneous=$ifcount1;
$success_homogeneous=$ifcount1+1;
$structured=$ifcount1+2;
$success_structured=$ifcount1+3;
close $in1 or die "Can't close $filein: $!";
open($in1, "<", $filein) or die "Can't open $filein: $!";
$i1=1;
while (my $line1 = <$in1>)
{
chomp $line1;
if (($i1>=$myfirstline)&&($i1<=$mylastline))
{
if ($array0[$ifcount2] == $i1)
{
$array2[$ifcount2]=0;
$array3[$ifcount2]=0;
if ($array0[$ifcount2+1] == ($i1+1))
{
$array2[$ifcount2]=($ifcount2+1);
}
else
{
open(my $in2, "<", $filein) or die "Can't open $filein: $!";
$i2=1;
while (my $line2 = <$in2>)
{
chomp $line2;
if ($i2 == $i1)
{
last;
}
$i2++;
}
my $line2 = <$in2>;
chomp $line2;
if ($line2=~/goto (\w+)/)
{
$tmp=$1;
if ($tmp eq "homogeneous")
{
$array2[$ifcount2]=$homogeneous;
}
if ($tmp eq "success_homogeneous")
{
$array2[$ifcount2]=$success_homogeneous;
}
if ($tmp eq "structured")
{
$array2[$ifcount2]=$structured;
}
if ($tmp eq "success_structured")
{
$array2[$ifcount2]=$success_structured;
}
}
else
{
die "goto expected: $!";
}
close $in2 or die "Can't close $filein: $!";
}
#find next else and interpret it
open(my $in3, "<", $filein) or die "Can't open $filein: $!";
$i3=1;
$ifcount3=0;
$elsecount=0;
while (my $line3 = <$in3>)
{
chomp $line3;
$i3++;
if ($i3 == $i1)
{
last;
}
}
while (my $line3 = <$in3>)
{
chomp $line3;
$ifcount3++;
if (($elsecount==0)&&($i3>$i1))
{
if ($line3=~/goto (\w+)/)
{
$tmp=$1;
if ($tmp eq "homogeneous")
{
$array3[$ifcount2]=$homogeneous;
}
if ($tmp eq "success_homogeneous")
{
$array3[$ifcount2]=$success_homogeneous;
}
if ($tmp eq "structured")
{
$array3[$ifcount2]=$structured;
}
if ($tmp eq "success_structured")
{
$array3[$ifcount2]=$success_structured;
}
}
else
{
if ($line3=~/if\(ptr\[offset/)
{
$ifcount4=0;
while ($array0[$ifcount4]!=$i3)
{
$ifcount4++;
if ($ifcount4==$ifcount1)
{
die "if else match expected: $!";
}
$array3[$ifcount2]=$ifcount4;
}
}
else
{
die "elseif or elsegoto match expected: $!";
}
}
last;
}
else
{
if ($line3=~/if\(ptr\[offset/)
{
$elsecount++;
}
else
{
if ($line3=~/else/)
{
$elsecount--;
}
}
}
$i3++;
}
printf("%3d [%3d][0x%08x]\n", $array0[$ifcount2], $ifcount2, (($array1[$ifcount2]&15)<<28)|($array2[$ifcount2]<<16)|(($array1[$ifcount2]&128)<<5)|($array3[$ifcount2]));
close $in3 or die "Can't close $filein: $!";
$ifcount2++;
}
else
{
}
}
$i1++;
}
printf(" [%3d][0x%08x]\n", $homogeneous, 252);
printf(" [%3d][0x%08x]\n", $success_homogeneous, 253);
printf(" [%3d][0x%08x]\n", $structured, 254);
printf(" [%3d][0x%08x]\n", $success_structured, 255);
close $in1 or die "Can't close $filein: $!";
$ifcount0=0;
$ifcount2=0;
printf $out " static const unsigned long %s[] = {\n ", $tablename;
while ($ifcount0 < $ifcount1)
{
printf $out "0x%08x, ", (($array1[$ifcount0]&15)<<28)|($array2[$ifcount0]<<16)|(($array1[$ifcount0]&128)<<5)|($array3[$ifcount0]);
$ifcount0++;
$ifcount2++;
if ($ifcount2==8)
{
$ifcount2=0;
printf $out "\n";
printf $out " ";
}
}
printf $out "0x%08x, ", 252;
$ifcount0++;
$ifcount2++;
if ($ifcount2==8)
{
$ifcount2=0;
printf $out "\n";
printf $out " ";
}
printf $out "0x%08x, ", 253;
$ifcount0++;
$ifcount2++;
if ($ifcount2==8)
{
$ifcount2=0;
printf $out "\n";
printf $out " ";
}
printf $out "0x%08x, ", 254;
$ifcount0++;
$ifcount2++;
if ($ifcount2==8)
{
$ifcount2=0;
printf $out "\n";
printf $out " ";
}
printf $out "0x%08x\n", 255;
$ifcount0++;
$ifcount2++;
printf $out " };\n\n";
$#array0 = -1;
$#array1 = -1;
$#array2 = -1;
$#array3 = -1;
close $out or die "Can't close $fileout: $!";

View File

@ -1,244 +0,0 @@
#!/usr/bin/perl
use strict;
use warnings;
use autodie; # die if problem reading or writing a file

# Same as the script above, but for the AGAST corner-score decision tree in
# agast_score.txt: emit a 32-bit lookup table named $ARGV[2] for lines
# $ARGV[0]..$ARGV[1], written to agast_new.txt.
my $filein = "./agast_score.txt";
my $fileout = "./agast_new.txt";
my $i1=1;
my $i2=1;
my $i3=1;
my $tmp;
my $ifcount0=0;
my $ifcount1=0;
my $ifcount2=0;
my $ifcount3=0;
my $ifcount4=0;
my $elsecount;
my $myfirstline = $ARGV[0];
my $mylastline = $ARGV[1];
my $tablename = $ARGV[2];
my @array0 = ();
my @array1 = ();
my @array2 = ();
my @array3 = ();
my $is_not_a_corner;
my $is_a_corner;
open(my $in1, "<", $filein) or die "Can't open $filein: $!";
open(my $out, ">", $fileout) or die "Can't open $fileout: $!";
$array0[0] = 0;
$i1=1;
while (my $line1 = <$in1>)
{
chomp $line1;
$array0[$i1] = 0;
if (($i1>=$myfirstline)&&($i1<=$mylastline))
{
if($line1=~/if\(ptr\[offset(\d+)/)
{
if($line1=~/if\(ptr\[offset(\d+).*\>.*cb/)
{
$tmp=$1;
}
else
{
if($line1=~/if\(ptr\[offset(\d+).*\<.*c\_b/)
{
$tmp=$1+128;
}
else
{
die "invalid array index!"
}
}
$array1[$ifcount1] = $tmp;
$array0[$ifcount1] = $i1;
$ifcount1++;
}
else
{
}
}
$i1++;
}
$is_not_a_corner=$ifcount1;
$is_a_corner=$ifcount1+1;
close $in1 or die "Can't close $filein: $!";
open($in1, "<", $filein) or die "Can't open $filein: $!";
$i1=1;
while (my $line1 = <$in1>)
{
chomp $line1;
if (($i1>=$myfirstline)&&($i1<=$mylastline))
{
if ($array0[$ifcount2] == $i1)
{
$array2[$ifcount2]=0;
$array3[$ifcount2]=0;
if ($array0[$ifcount2+1] == ($i1+1))
{
$array2[$ifcount2]=($ifcount2+1);
}
else
{
open(my $in2, "<", $filein) or die "Can't open $filein: $!";
$i2=1;
while (my $line2 = <$in2>)
{
chomp $line2;
if ($i2 == $i1)
{
last;
}
$i2++;
}
my $line2 = <$in2>;
chomp $line2;
if ($line2=~/goto (\w+)/)
{
$tmp=$1;
if ($tmp eq "is_not_a_corner")
{
$array2[$ifcount2]=$is_not_a_corner;
}
if ($tmp eq "is_a_corner")
{
$array2[$ifcount2]=$is_a_corner;
}
}
else
{
die "goto expected: $!";
}
close $in2 or die "Can't close $filein: $!";
}
#find next else and interpret it
open(my $in3, "<", $filein) or die "Can't open $filein: $!";
$i3=1;
$ifcount3=0;
$elsecount=0;
while (my $line3 = <$in3>)
{
chomp $line3;
$i3++;
if ($i3 == $i1)
{
last;
}
}
while (my $line3 = <$in3>)
{
chomp $line3;
$ifcount3++;
if (($elsecount==0)&&($i3>$i1))
{
if ($line3=~/goto (\w+)/)
{
$tmp=$1;
if ($tmp eq "is_not_a_corner")
{
$array3[$ifcount2]=$is_not_a_corner;
}
if ($tmp eq "is_a_corner")
{
$array3[$ifcount2]=$is_a_corner;
}
}
else
{
if ($line3=~/if\(ptr\[offset/)
{
$ifcount4=0;
while ($array0[$ifcount4]!=$i3)
{
$ifcount4++;
if ($ifcount4==$ifcount1)
{
die "if else match expected: $!";
}
$array3[$ifcount2]=$ifcount4;
}
}
else
{
die "elseif or elsegoto match expected: $!";
}
}
last;
}
else
{
if ($line3=~/if\(ptr\[offset/)
{
$elsecount++;
}
else
{
if ($line3=~/else/)
{
$elsecount--;
}
}
}
$i3++;
}
printf("%3d [%3d][0x%08x]\n", $array0[$ifcount2], $ifcount2, (($array1[$ifcount2]&15)<<28)|($array2[$ifcount2]<<16)|(($array1[$ifcount2]&128)<<5)|($array3[$ifcount2]));
close $in3 or die "Can't close $filein: $!";
$ifcount2++;
}
else
{
}
}
$i1++;
}
printf(" [%3d][0x%08x]\n", $is_not_a_corner, 254);
printf(" [%3d][0x%08x]\n", $is_a_corner, 255);
close $in1 or die "Can't close $filein: $!";
$ifcount0=0;
$ifcount2=0;
printf $out " static const unsigned long %s[] = {\n ", $tablename;
while ($ifcount0 < $ifcount1)
{
printf $out "0x%08x, ", (($array1[$ifcount0]&15)<<28)|($array2[$ifcount0]<<16)|(($array1[$ifcount0]&128)<<5)|($array3[$ifcount0]);
$ifcount0++;
$ifcount2++;
if ($ifcount2==8)
{
$ifcount2=0;
printf $out "\n";
printf $out " ";
}
}
printf $out "0x%08x, ", 254;
$ifcount0++;
$ifcount2++;
if ($ifcount2==8)
{
$ifcount2=0;
printf $out "\n";
printf $out " ";
}
printf $out "0x%08x\n", 255;
$ifcount0++;
$ifcount2++;
printf $out " };\n\n";
$#array0 = -1;
$#array1 = -1;
$#array2 = -1;
$#array3 = -1;
close $out or die "Can't close $fileout: $!";

View File

@ -1,32 +0,0 @@
perl read_file_score32.pl 9059 9385 table_5_8_corner_struct
move agast_new.txt agast_score_table.txt
perl read_file_score32.pl 2215 3387 table_7_12d_corner_struct
copy /A agast_score_table.txt + agast_new.txt agast_score_table.txt
del agast_new.txt
perl read_file_score32.pl 3428 9022 table_7_12s_corner_struct
copy /A agast_score_table.txt + agast_new.txt agast_score_table.txt
del agast_new.txt
perl read_file_score32.pl 118 2174 table_9_16_corner_struct
copy /A agast_score_table.txt + agast_new.txt agast_score_table.txt
del agast_new.txt
perl read_file_nondiff32.pl 103 430 table_5_8_struct1
move agast_new.txt agast_table.txt
perl read_file_nondiff32.pl 440 779 table_5_8_struct2
copy /A agast_table.txt + agast_new.txt agast_table.txt
del agast_new.txt
perl read_file_nondiff32.pl 869 2042 table_7_12d_struct1
copy /A agast_table.txt + agast_new.txt agast_table.txt
del agast_new.txt
perl read_file_nondiff32.pl 2052 3225 table_7_12d_struct2
copy /A agast_table.txt + agast_new.txt agast_table.txt
del agast_new.txt
perl read_file_nondiff32.pl 3315 4344 table_7_12s_struct1
copy /A agast_table.txt + agast_new.txt agast_table.txt
del agast_new.txt
perl read_file_nondiff32.pl 4354 5308 table_7_12s_struct2
copy /A agast_table.txt + agast_new.txt agast_table.txt
del agast_new.txt
perl read_file_nondiff32.pl 5400 7454 table_9_16_struct
copy /A agast_table.txt + agast_new.txt agast_table.txt
del agast_new.txt

View File

@ -62,9 +62,6 @@
All objects that implement vector descriptor matchers inherit the DescriptorMatcher interface.
@defgroup features2d_draw Drawing Function of Keypoints and Matches
@defgroup features2d_category Object Categorization
This section describes approaches based on local 2D features and used to categorize objects.
@defgroup feature2d_hal Hardware Acceleration Layer
@{
@ -341,71 +338,6 @@ typedef SIFT SiftFeatureDetector;
typedef SIFT SiftDescriptorExtractor;
/** @brief Class implementing the BRISK keypoint detector and descriptor extractor, described in @cite LCS11 .
*/
class CV_EXPORTS_W BRISK : public Feature2D
{
public:
/** @brief The BRISK constructor
@param thresh AGAST detection threshold score.
@param octaves detection octaves. Use 0 to do single scale.
@param patternScale apply this scale to the pattern used for sampling the neighbourhood of a
keypoint.
*/
CV_WRAP static Ptr<BRISK> create(int thresh=30, int octaves=3, float patternScale=1.0f);
/** @brief The BRISK constructor for a custom pattern
@param radiusList defines the radii (in pixels) where the samples around a keypoint are taken (for
keypoint scale 1).
@param numberList defines the number of sampling points on the sampling circle. Must be the same
size as radiusList.
@param dMax threshold for the short pairings used for descriptor formation (in pixels for keypoint
scale 1).
@param dMin threshold for the long pairings used for orientation determination (in pixels for
keypoint scale 1).
@param indexChange index remapping of the bits. */
CV_WRAP static Ptr<BRISK> create(const std::vector<float> &radiusList, const std::vector<int> &numberList,
float dMax=5.85f, float dMin=8.2f, const std::vector<int>& indexChange=std::vector<int>());
/** @brief The BRISK constructor for a custom pattern, detection threshold and octaves
@param thresh AGAST detection threshold score.
@param octaves detection octaves. Use 0 to do single scale.
@param radiusList defines the radii (in pixels) where the samples around a keypoint are taken (for
keypoint scale 1).
@param numberList defines the number of sampling points on the sampling circle. Must be the same
size as radiusList.
@param dMax threshold for the short pairings used for descriptor formation (in pixels for keypoint
scale 1).
@param dMin threshold for the long pairings used for orientation determination (in pixels for
keypoint scale 1).
@param indexChange index remapping of the bits. */
CV_WRAP static Ptr<BRISK> create(int thresh, int octaves, const std::vector<float> &radiusList,
const std::vector<int> &numberList, float dMax=5.85f, float dMin=8.2f,
const std::vector<int>& indexChange=std::vector<int>());
CV_WRAP virtual String getDefaultName() const CV_OVERRIDE;
/** @brief Set detection threshold.
@param threshold AGAST detection threshold score.
*/
CV_WRAP virtual void setThreshold(int threshold) = 0;
CV_WRAP virtual int getThreshold() const = 0;
/** @brief Set detection octaves.
@param octaves detection octaves. Use 0 to do single scale.
*/
CV_WRAP virtual void setOctaves(int octaves) = 0;
CV_WRAP virtual int getOctaves() const = 0;
/** @brief Set detection patternScale.
@param patternScale apply this scale to the pattern used for sampling the neighbourhood of a
keypoint.
*/
CV_WRAP virtual void setPatternScale(float patternScale) = 0;
CV_WRAP virtual float getPatternScale() const = 0;
};
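For reference, a minimal sketch of how this (now removed) BRISK interface was typically driven from
Python before the move; 'board.jpg' is a placeholder image, and after this PR the class is expected
to live in opencv_contrib's xfeatures2d instead:

@code{.py}
import cv2 as cv

img = cv.imread('board.jpg', cv.IMREAD_GRAYSCALE)
brisk = cv.BRISK_create(thresh=30, octaves=3, patternScale=1.0)
kp, des = brisk.detectAndCompute(img, None)
print(len(kp), des.shape)  # binary descriptors, matched with NORM_HAMMING
@endcode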
/** @brief Class implementing the ORB (*oriented BRIEF*) keypoint detector and descriptor extractor
described in @cite RRKB11 . The algorithm uses FAST in pyramids to detect stable keypoints, selects
@ -616,56 +548,6 @@ CV_EXPORTS void FAST( InputArray image, CV_OUT std::vector<KeyPoint>& keypoints,
int threshold, bool nonmaxSuppression=true, FastFeatureDetector::DetectorType type=FastFeatureDetector::TYPE_9_16 );
/** @brief Wrapping class for feature detection using the AGAST method.
*/
class CV_EXPORTS_W AgastFeatureDetector : public Feature2D
{
public:
enum DetectorType
{
AGAST_5_8 = 0, AGAST_7_12d = 1, AGAST_7_12s = 2, OAST_9_16 = 3,
};
enum
{
THRESHOLD = 10000, NONMAX_SUPPRESSION = 10001,
};
CV_WRAP static Ptr<AgastFeatureDetector> create( int threshold=10,
bool nonmaxSuppression=true,
AgastFeatureDetector::DetectorType type = AgastFeatureDetector::OAST_9_16);
CV_WRAP virtual void setThreshold(int threshold) = 0;
CV_WRAP virtual int getThreshold() const = 0;
CV_WRAP virtual void setNonmaxSuppression(bool f) = 0;
CV_WRAP virtual bool getNonmaxSuppression() const = 0;
CV_WRAP virtual void setType(AgastFeatureDetector::DetectorType type) = 0;
CV_WRAP virtual AgastFeatureDetector::DetectorType getType() const = 0;
CV_WRAP virtual String getDefaultName() const CV_OVERRIDE;
};
/** @brief Detects corners using the AGAST algorithm
@param image grayscale image where keypoints (corners) are detected.
@param keypoints keypoints detected on the image.
@param threshold threshold on difference between intensity of the central pixel and pixels of a
circle around this pixel.
@param nonmaxSuppression if true, non-maximum suppression is applied to detected keypoints (corners).
@param type one of the four neighborhoods as defined in the paper:
AgastFeatureDetector::AGAST_5_8, AgastFeatureDetector::AGAST_7_12d,
AgastFeatureDetector::AGAST_7_12s, AgastFeatureDetector::OAST_9_16
For non-Intel platforms, there is an optimised decision-tree variant of AGAST with the same
numerical results. The 32-bit binary tree tables were generated automatically from the original
code using a Perl script. The Perl script and examples of tree generation are located in the
features2d/doc folder.
Detects corners using the AGAST algorithm by @cite mair2010_agast .
*/
CV_EXPORTS void AGAST( InputArray image, CV_OUT std::vector<KeyPoint>& keypoints,
int threshold, bool nonmaxSuppression=true, AgastFeatureDetector::DetectorType type=AgastFeatureDetector::OAST_9_16 );
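A minimal sketch of the removed AGAST wrapper as used from Python before the move ('board.jpg' is a
placeholder image):

@code{.py}
import cv2 as cv

img = cv.imread('board.jpg', cv.IMREAD_GRAYSCALE)
# Default neighbourhood is OAST_9_16, per the declaration above
agast = cv.AgastFeatureDetector_create(threshold=10, nonmaxSuppression=True)
kp = agast.detect(img, None)  # AGAST detects only; pair it with any descriptor
print(len(kp))
@endcode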
/** @brief Wrapping class for feature detection using the goodFeaturesToTrack function.
*/
class CV_EXPORTS_W GFTTDetector : public Feature2D
@ -773,134 +655,6 @@ public:
};
/** @brief Class implementing the KAZE keypoint detector and descriptor extractor, described in @cite ABD12 .
@note AKAZE descriptors can only be used with KAZE or AKAZE keypoints. .. [ABD12] KAZE Features. Pablo
F. Alcantarilla, Adrien Bartoli and Andrew J. Davison. In European Conference on Computer Vision
(ECCV), Florence, Italy, October 2012.
*/
class CV_EXPORTS_W KAZE : public Feature2D
{
public:
enum DiffusivityType
{
DIFF_PM_G1 = 0,
DIFF_PM_G2 = 1,
DIFF_WEICKERT = 2,
DIFF_CHARBONNIER = 3
};
/** @brief The KAZE constructor
@param extended Set to enable extraction of extended (128-byte) descriptor.
@param upright Set to enable use of upright descriptors (non rotation-invariant).
@param threshold Detector response threshold to accept point
@param nOctaves Maximum octave evolution of the image
@param nOctaveLayers Default number of sublevels per scale level
@param diffusivity Diffusivity type. DIFF_PM_G1, DIFF_PM_G2, DIFF_WEICKERT or
DIFF_CHARBONNIER
*/
CV_WRAP static Ptr<KAZE> create(bool extended=false, bool upright=false,
float threshold = 0.001f,
int nOctaves = 4, int nOctaveLayers = 4,
KAZE::DiffusivityType diffusivity = KAZE::DIFF_PM_G2);
CV_WRAP virtual void setExtended(bool extended) = 0;
CV_WRAP virtual bool getExtended() const = 0;
CV_WRAP virtual void setUpright(bool upright) = 0;
CV_WRAP virtual bool getUpright() const = 0;
CV_WRAP virtual void setThreshold(double threshold) = 0;
CV_WRAP virtual double getThreshold() const = 0;
CV_WRAP virtual void setNOctaves(int octaves) = 0;
CV_WRAP virtual int getNOctaves() const = 0;
CV_WRAP virtual void setNOctaveLayers(int octaveLayers) = 0;
CV_WRAP virtual int getNOctaveLayers() const = 0;
CV_WRAP virtual void setDiffusivity(KAZE::DiffusivityType diff) = 0;
CV_WRAP virtual KAZE::DiffusivityType getDiffusivity() const = 0;
CV_WRAP virtual String getDefaultName() const CV_OVERRIDE;
};
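Likewise, a short hedged sketch of the removed KAZE interface from Python ('graf1.png' is a
placeholder image):

@code{.py}
import cv2 as cv

img = cv.imread('graf1.png', cv.IMREAD_GRAYSCALE)
kaze = cv.KAZE_create(extended=False, upright=False, threshold=0.001)
kp, des = kaze.detectAndCompute(img, None)
print(des.shape)  # N x 64 float32 (N x 128 with extended=True), matched with NORM_L2
@endcode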
/** @brief Class implementing the AKAZE keypoint detector and descriptor extractor, described in @cite ANB13.
@details AKAZE descriptors can only be used with KAZE or AKAZE keypoints. This class is thread-safe.
@note When you need descriptors, use Feature2D::detectAndCompute, which
provides better performance. When using Feature2D::detect followed by
Feature2D::compute, the scale space pyramid is computed twice.
@note AKAZE implements T-API. When an image is passed as a UMat, some parts of the
algorithm will use OpenCL.
@note [ANB13] Fast Explicit Diffusion for Accelerated Features in Nonlinear
Scale Spaces. Pablo F. Alcantarilla, Jesús Nuevo and Adrien Bartoli. In
British Machine Vision Conference (BMVC), Bristol, UK, September 2013.
*/
class CV_EXPORTS_W AKAZE : public Feature2D
{
public:
// AKAZE descriptor type
enum DescriptorType
{
DESCRIPTOR_KAZE_UPRIGHT = 2, ///< Upright descriptors, not invariant to rotation
DESCRIPTOR_KAZE = 3,
DESCRIPTOR_MLDB_UPRIGHT = 4, ///< Upright descriptors, not invariant to rotation
DESCRIPTOR_MLDB = 5
};
/** @brief The AKAZE constructor
@param descriptor_type Type of the extracted descriptor: DESCRIPTOR_KAZE,
DESCRIPTOR_KAZE_UPRIGHT, DESCRIPTOR_MLDB or DESCRIPTOR_MLDB_UPRIGHT.
@param descriptor_size Size of the descriptor in bits. 0 -\> Full size
@param descriptor_channels Number of channels in the descriptor (1, 2, 3)
@param threshold Detector response threshold to accept point
@param nOctaves Maximum octave evolution of the image
@param nOctaveLayers Default number of sublevels per scale level
@param diffusivity Diffusivity type. DIFF_PM_G1, DIFF_PM_G2, DIFF_WEICKERT or
DIFF_CHARBONNIER
@param max_points Maximum number of returned points. In case the image contains
more features, the features with the highest response are returned.
A negative value means no limit.
*/
CV_WRAP static Ptr<AKAZE> create(AKAZE::DescriptorType descriptor_type = AKAZE::DESCRIPTOR_MLDB,
int descriptor_size = 0, int descriptor_channels = 3,
float threshold = 0.001f, int nOctaves = 4,
int nOctaveLayers = 4, KAZE::DiffusivityType diffusivity = KAZE::DIFF_PM_G2,
int max_points = -1);
CV_WRAP virtual void setDescriptorType(AKAZE::DescriptorType dtype) = 0;
CV_WRAP virtual AKAZE::DescriptorType getDescriptorType() const = 0;
CV_WRAP virtual void setDescriptorSize(int dsize) = 0;
CV_WRAP virtual int getDescriptorSize() const = 0;
CV_WRAP virtual void setDescriptorChannels(int dch) = 0;
CV_WRAP virtual int getDescriptorChannels() const = 0;
CV_WRAP virtual void setThreshold(double threshold) = 0;
CV_WRAP virtual double getThreshold() const = 0;
CV_WRAP virtual void setNOctaves(int octaves) = 0;
CV_WRAP virtual int getNOctaves() const = 0;
CV_WRAP virtual void setNOctaveLayers(int octaveLayers) = 0;
CV_WRAP virtual int getNOctaveLayers() const = 0;
CV_WRAP virtual void setDiffusivity(KAZE::DiffusivityType diff) = 0;
CV_WRAP virtual KAZE::DiffusivityType getDiffusivity() const = 0;
CV_WRAP virtual String getDefaultName() const CV_OVERRIDE;
CV_WRAP virtual void setMaxPoints(int max_points) = 0;
CV_WRAP virtual int getMaxPoints() const = 0;
};
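And the AKAZE counterpart, using detectAndCompute as the note above recommends (again a minimal
sketch with a placeholder image):

@code{.py}
import cv2 as cv

img = cv.imread('graf1.png', cv.IMREAD_GRAYSCALE)
akaze = cv.AKAZE_create()  # DESCRIPTOR_MLDB by default: binary, NORM_HAMMING
kp, des = akaze.detectAndCompute(img, None)  # single pass, pyramid built once
print(len(kp), des.shape)
@endcode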
/****************************************************************************************\
* Distance *
\****************************************************************************************/
@ -1424,165 +1178,6 @@ CV_EXPORTS int getNearestPoint( const std::vector<Point2f>& recallPrecisionCurve
//! @}
/****************************************************************************************\
* Bag of visual words *
\****************************************************************************************/
//! @addtogroup features2d_category
//! @{
/** @brief Abstract base class for training the *bag of visual words* vocabulary from a set of descriptors.
For details, see, for example, *Visual Categorization with Bags of Keypoints* by Gabriella Csurka,
Christopher R. Dance, Lixin Fan, Jutta Willamowski, Cedric Bray, 2004.
*/
class CV_EXPORTS_W BOWTrainer
{
public:
BOWTrainer();
virtual ~BOWTrainer();
/** @brief Adds descriptors to a training set.
@param descriptors Descriptors to add to a training set. Each row of the descriptors matrix is a
descriptor.
The training set is then clustered using the chosen cluster method to construct the vocabulary.
*/
CV_WRAP void add( const Mat& descriptors );
/** @brief Returns a training set of descriptors.
*/
CV_WRAP const std::vector<Mat>& getDescriptors() const;
/** @brief Returns the count of all descriptors stored in the training set.
*/
CV_WRAP int descriptorsCount() const;
CV_WRAP virtual void clear();
/** @overload */
CV_WRAP virtual Mat cluster() const = 0;
/** @brief Clusters train descriptors.
@param descriptors Descriptors to cluster. Each row of the descriptors matrix is a descriptor.
Descriptors are not added to the inner train descriptor set.
The vocabulary consists of cluster centers. So, this method returns the vocabulary. In the first
variant of the method, train descriptors stored in the object are clustered. In the second variant,
input descriptors are clustered.
*/
CV_WRAP virtual Mat cluster( const Mat& descriptors ) const = 0;
protected:
std::vector<Mat> descriptors;
int size;
};
/** @brief kmeans-based class to train a visual vocabulary using the *bag of visual words* approach.
*/
class CV_EXPORTS_W BOWKMeansTrainer : public BOWTrainer
{
public:
/** @brief The constructor.
@see cv::kmeans
*/
CV_WRAP BOWKMeansTrainer( int clusterCount, const TermCriteria& termcrit=TermCriteria(),
int attempts=3, int flags=KMEANS_PP_CENTERS );
virtual ~BOWKMeansTrainer();
// Returns trained vocabulary (i.e. cluster centers).
CV_WRAP virtual Mat cluster() const CV_OVERRIDE;
CV_WRAP virtual Mat cluster( const Mat& descriptors ) const CV_OVERRIDE;
protected:
int clusterCount;
TermCriteria termcrit;
int attempts;
int flags;
};
/** @brief Class to compute an image descriptor using the *bag of visual words*.
Such a computation consists of the following steps:
1. Compute descriptors for a given image and its keypoints set.
2. Find the nearest visual words from the vocabulary for each keypoint descriptor.
3. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words
encountered in the image. The i-th bin of the histogram is the frequency of the i-th word of the
vocabulary in the given image.
*/
class CV_EXPORTS_W BOWImgDescriptorExtractor
{
public:
/** @brief The constructor.
@param dextractor Descriptor extractor that is used to compute descriptors for an input image and
its keypoints.
@param dmatcher Descriptor matcher that is used to find the nearest word of the trained vocabulary
for each keypoint descriptor of the image.
*/
CV_WRAP BOWImgDescriptorExtractor( const Ptr<Feature2D>& dextractor,
const Ptr<DescriptorMatcher>& dmatcher );
/** @overload */
BOWImgDescriptorExtractor( const Ptr<DescriptorMatcher>& dmatcher );
virtual ~BOWImgDescriptorExtractor();
/** @brief Sets a visual vocabulary.
@param vocabulary Vocabulary (can be trained using the inheritor of BOWTrainer ). Each row of the
vocabulary is a visual word (cluster center).
*/
CV_WRAP void setVocabulary( const Mat& vocabulary );
/** @brief Returns the set vocabulary.
*/
CV_WRAP const Mat& getVocabulary() const;
/** @brief Computes an image descriptor using the set visual vocabulary.
@param image Image, for which the descriptor is computed.
@param keypoints Keypoints detected in the input image.
@param imgDescriptor Computed output image descriptor.
@param pointIdxsOfClusters Indices of keypoints that belong to the cluster. This means that
pointIdxsOfClusters[i] are keypoint indices that belong to the i -th cluster (word of vocabulary)
returned if it is non-zero.
@param descriptors Descriptors of the image keypoints that are returned if they are non-zero.
*/
void compute( InputArray image, std::vector<KeyPoint>& keypoints, OutputArray imgDescriptor,
std::vector<std::vector<int> >* pointIdxsOfClusters=0, Mat* descriptors=0 );
/** @overload
@param keypointDescriptors Computed descriptors to match with vocabulary.
@param imgDescriptor Computed output image descriptor.
@param pointIdxsOfClusters Indices of keypoints that belong to the cluster. This means that
pointIdxsOfClusters[i] are keypoint indices that belong to the i -th cluster (word of vocabulary)
returned if it is non-zero.
*/
void compute( InputArray keypointDescriptors, OutputArray imgDescriptor,
std::vector<std::vector<int> >* pointIdxsOfClusters=0 );
// compute() is not constant because DescriptorMatcher::match is not constant
CV_WRAP_AS(compute) void compute2( const Mat& image, std::vector<KeyPoint>& keypoints, CV_OUT Mat& imgDescriptor )
{ compute(image,keypoints,imgDescriptor); }
/** @brief Returns an image descriptor size if the vocabulary is set. Otherwise, it returns 0.
*/
CV_WRAP int descriptorSize() const;
/** @brief Returns an image descriptor type.
*/
CV_WRAP int descriptorType() const;
protected:
Mat vocabulary;
Ptr<DescriptorExtractor> dextractor;
Ptr<DescriptorMatcher> dmatcher;
};
//! @} features2d_category
} /* namespace cv */
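To make the three BOW steps above concrete, here is a minimal end-to-end sketch of the removed
pipeline from Python, using SIFT (which stays in features2d) as the descriptor; the image filenames
and the cluster count are placeholders:

@code{.py}
import cv2 as cv

sift = cv.SIFT_create()
matcher = cv.BFMatcher(cv.NORM_L2)
train_imgs = [cv.imread(f, cv.IMREAD_GRAYSCALE) for f in ('a.jpg', 'b.jpg')]

# 1) Cluster training descriptors into a 50-word visual vocabulary.
bow_trainer = cv.BOWKMeansTrainer(50)
for img in train_imgs:
    kp, des = sift.detectAndCompute(img, None)
    bow_trainer.add(des)
vocabulary = bow_trainer.cluster()

# 2)-3) Assign each keypoint descriptor to its nearest word and build the
# normalized histogram that serves as the image descriptor.
bow_extractor = cv.BOWImgDescriptorExtractor(sift, matcher)
bow_extractor.setVocabulary(vocabulary)
kp = sift.detect(train_imgs[0], None)
hist = bow_extractor.compute(train_imgs[0], kp)
print(hist.shape)  # (1, 50)
@endcode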

View File

@ -1,85 +0,0 @@
package org.opencv.test.features2d;
import org.opencv.test.OpenCVTestCase;
import org.opencv.test.OpenCVTestRunner;
import org.opencv.features2d.AgastFeatureDetector;
public class AGASTFeatureDetectorTest extends OpenCVTestCase {
AgastFeatureDetector detector;
@Override
protected void setUp() throws Exception {
super.setUp();
detector = AgastFeatureDetector.create(); // default (10,true,3)
}
public void testCreate() {
assertNotNull(detector);
}
public void testDetectListOfMatListOfListOfKeyPoint() {
fail("Not yet implemented");
}
public void testDetectListOfMatListOfListOfKeyPointListOfMat() {
fail("Not yet implemented");
}
public void testDetectMatListOfKeyPoint() {
fail("Not yet implemented");
}
public void testDetectMatListOfKeyPointMat() {
fail("Not yet implemented");
}
public void testEmpty() {
fail("Not yet implemented");
}
public void testRead() {
String filename = OpenCVTestRunner.getTempFileName("xml");
writeFile(filename, "<?xml version=\"1.0\"?>\n<opencv_storage>\n<name>Feature2D.AgastFeatureDetector</name>\n<threshold>11</threshold>\n<nonmaxSuppression>0</nonmaxSuppression>\n<type>2</type>\n</opencv_storage>\n");
detector.read(filename);
assertEquals(11, detector.getThreshold());
assertEquals(false, detector.getNonmaxSuppression());
assertEquals(2, detector.getType());
}
public void testReadYml() {
String filename = OpenCVTestRunner.getTempFileName("yml");
writeFile(filename, "%YAML:1.0\n---\nname: \"Feature2D.AgastFeatureDetector\"\nthreshold: 11\nnonmaxSuppression: 0\ntype: 2\n");
detector.read(filename);
assertEquals(11, detector.getThreshold());
assertEquals(false, detector.getNonmaxSuppression());
assertEquals(2, detector.getType());
}
public void testWrite() {
String filename = OpenCVTestRunner.getTempFileName("xml");
detector.write(filename);
String truth = "<?xml version=\"1.0\"?>\n<opencv_storage>\n<name>Feature2D.AgastFeatureDetector</name>\n<threshold>10</threshold>\n<nonmaxSuppression>1</nonmaxSuppression>\n<type>3</type>\n</opencv_storage>\n";
String actual = readFile(filename);
actual = actual.replaceAll("e([+-])0(\\d\\d)", "e$1$2"); // NOTE: workaround for different platforms double representation
assertEquals(truth, actual);
}
public void testWriteYml() {
String filename = OpenCVTestRunner.getTempFileName("yml");
detector.write(filename);
String truth = "%YAML:1.0\n---\nname: \"Feature2D.AgastFeatureDetector\"\nthreshold: 10\nnonmaxSuppression: 1\ntype: 3\n";
String actual = readFile(filename);
actual = actual.replaceAll("e([+-])0(\\d\\d)", "e$1$2"); // NOTE: workaround for different platforms double representation
assertEquals(truth, actual);
}
}

View File

@ -1,67 +0,0 @@
package org.opencv.test.features2d;
import org.opencv.test.OpenCVTestCase;
import org.opencv.test.OpenCVTestRunner;
import org.opencv.features2d.AKAZE;
public class AKAZEDescriptorExtractorTest extends OpenCVTestCase {
AKAZE extractor;
@Override
protected void setUp() throws Exception {
super.setUp();
extractor = AKAZE.create(); // default (5,0,3,0.001f,4,4,1)
}
public void testCreate() {
assertNotNull(extractor);
}
public void testDetectListOfMatListOfListOfKeyPoint() {
fail("Not yet implemented");
}
public void testDetectListOfMatListOfListOfKeyPointListOfMat() {
fail("Not yet implemented");
}
public void testDetectMatListOfKeyPoint() {
fail("Not yet implemented");
}
public void testDetectMatListOfKeyPointMat() {
fail("Not yet implemented");
}
public void testEmpty() {
fail("Not yet implemented");
}
public void testReadYml() {
String filename = OpenCVTestRunner.getTempFileName("yml");
writeFile(filename, "%YAML:1.0\n---\nformat: 3\nname: \"Feature2D.AKAZE\"\ndescriptor: 4\ndescriptor_channels: 2\ndescriptor_size: 32\nthreshold: 0.125\noctaves: 3\nsublevels: 5\ndiffusivity: 2\n");
extractor.read(filename);
assertEquals(4, extractor.getDescriptorType());
assertEquals(2, extractor.getDescriptorChannels());
assertEquals(32, extractor.getDescriptorSize());
assertEquals(0.125, extractor.getThreshold());
assertEquals(3, extractor.getNOctaves());
assertEquals(5, extractor.getNOctaveLayers());
assertEquals(2, extractor.getDiffusivity());
}
public void testWriteYml() {
String filename = OpenCVTestRunner.getTempFileName("yml");
extractor.write(filename);
String truth = "%YAML:1.0\n---\nformat: 3\nname: \"Feature2D.AKAZE\"\ndescriptor: 5\ndescriptor_channels: 3\ndescriptor_size: 0\nthreshold: 0.0010000000474974513\noctaves: 4\nsublevels: 4\ndiffusivity: 1\nmax_points: -1\n";
String actual = readFile(filename);
actual = actual.replaceAll("e([+-])0(\\d\\d)", "e$1$2"); // NOTE: workaround for different platforms double representation
assertEquals(truth, actual);
}
}

View File

@ -1,48 +0,0 @@
package org.opencv.test.features2d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.KeyPoint;
import org.opencv.features2d.ORB;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.BOWImgDescriptorExtractor;
import org.opencv.test.OpenCVTestCase;
import org.opencv.test.OpenCVTestRunner;
import org.opencv.imgproc.Imgproc;
public class BOWImgDescriptorExtractorTest extends OpenCVTestCase {
ORB extractor;
DescriptorMatcher matcher;
int matSize;
public static void assertDescriptorsClose(Mat expected, Mat actual, int allowedDistance) {
double distance = Core.norm(expected, actual, Core.NORM_HAMMING);
assertTrue("expected:<" + allowedDistance + "> but was:<" + distance + ">", distance <= allowedDistance);
}
private Mat getTestImg() {
Mat cross = new Mat(matSize, matSize, CvType.CV_8U, new Scalar(255));
Imgproc.line(cross, new Point(20, matSize / 2), new Point(matSize - 21, matSize / 2), new Scalar(100), 2);
Imgproc.line(cross, new Point(matSize / 2, 20), new Point(matSize / 2, matSize - 21), new Scalar(100), 2);
return cross;
}
@Override
protected void setUp() throws Exception {
super.setUp();
extractor = ORB.create();
matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
matSize = 100;
}
public void testCreate() {
BOWImgDescriptorExtractor bow = new BOWImgDescriptorExtractor(extractor, matcher);
}
}

View File

@ -1,102 +0,0 @@
package org.opencv.test.features2d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.KeyPoint;
import org.opencv.test.OpenCVTestCase;
import org.opencv.test.OpenCVTestRunner;
import org.opencv.imgproc.Imgproc;
import org.opencv.features2d.Feature2D;
public class BRIEFDescriptorExtractorTest extends OpenCVTestCase {
Feature2D extractor;
int matSize;
private Mat getTestImg() {
Mat cross = new Mat(matSize, matSize, CvType.CV_8U, new Scalar(255));
Imgproc.line(cross, new Point(20, matSize / 2), new Point(matSize - 21, matSize / 2), new Scalar(100), 2);
Imgproc.line(cross, new Point(matSize / 2, 20), new Point(matSize / 2, matSize - 21), new Scalar(100), 2);
return cross;
}
@Override
protected void setUp() throws Exception {
super.setUp();
extractor = createClassInstance(XFEATURES2D+"BriefDescriptorExtractor", DEFAULT_FACTORY, null, null);
matSize = 100;
}
public void testComputeListOfMatListOfListOfKeyPointListOfMat() {
fail("Not yet implemented");
}
public void testComputeMatListOfKeyPointMat() {
KeyPoint point = new KeyPoint(55.775577545166016f, 44.224422454833984f, 16, 9.754629f, 8617.863f, 1, -1);
MatOfKeyPoint keypoints = new MatOfKeyPoint(point);
Mat img = getTestImg();
Mat descriptors = new Mat();
extractor.compute(img, keypoints, descriptors);
Mat truth = new Mat(1, 32, CvType.CV_8UC1) {
{
put(0, 0, 96, 0, 76, 24, 47, 182, 68, 137,
149, 195, 67, 16, 187, 224, 74, 8,
82, 169, 87, 70, 44, 4, 192, 56,
13, 128, 44, 106, 146, 72, 194, 245);
}
};
assertMatEqual(truth, descriptors);
}
public void testCreate() {
assertNotNull(extractor);
}
public void testDescriptorSize() {
assertEquals(32, extractor.descriptorSize());
}
public void testDescriptorType() {
assertEquals(CvType.CV_8U, extractor.descriptorType());
}
public void testEmpty() {
// assertFalse(extractor.empty());
fail("Not yet implemented"); // BRIEF does not override empty() method
}
public void testRead() {
String filename = OpenCVTestRunner.getTempFileName("yml");
writeFile(filename, "%YAML:1.0\n---\ndescriptorSize: 64\n");
extractor.read(filename);
assertEquals(64, extractor.descriptorSize());
}
public void testWrite() {
String filename = OpenCVTestRunner.getTempFileName("xml");
extractor.write(filename);
String truth = "<?xml version=\"1.0\"?>\n<opencv_storage>\n<name>Feature2D.BRIEF</name>\n<descriptorSize>32</descriptorSize>\n<use_orientation>0</use_orientation>\n</opencv_storage>\n";
assertEquals(truth, readFile(filename));
}
public void testWriteYml() {
String filename = OpenCVTestRunner.getTempFileName("yml");
extractor.write(filename);
String truth = "%YAML:1.0\n---\nname: \"Feature2D.BRIEF\"\ndescriptorSize: 32\nuse_orientation: 0\n";
assertEquals(truth, readFile(filename));
}
}

View File

@ -1,63 +0,0 @@
package org.opencv.test.features2d;
import org.opencv.test.OpenCVTestCase;
import org.opencv.test.OpenCVTestRunner;
import org.opencv.features2d.BRISK;
public class BRISKDescriptorExtractorTest extends OpenCVTestCase {
BRISK extractor;
@Override
protected void setUp() throws Exception {
super.setUp();
extractor = BRISK.create(); // default (30,3,1)
}
public void testCreate() {
assertNotNull(extractor);
}
public void testDetectListOfMatListOfListOfKeyPoint() {
fail("Not yet implemented");
}
public void testDetectListOfMatListOfListOfKeyPointListOfMat() {
fail("Not yet implemented");
}
public void testDetectMatListOfKeyPoint() {
fail("Not yet implemented");
}
public void testDetectMatListOfKeyPointMat() {
fail("Not yet implemented");
}
public void testEmpty() {
fail("Not yet implemented");
}
public void testReadYml() {
String filename = OpenCVTestRunner.getTempFileName("yml");
writeFile(filename, "%YAML:1.0\n---\nname: \"Feature2D.BRISK\"\nthreshold: 31\noctaves: 4\npatternScale: 1.1\n");
extractor.read(filename);
assertEquals(31, extractor.getThreshold());
assertEquals(4, extractor.getOctaves());
assertEquals(1.1f, extractor.getPatternScale());
}
public void testWriteYml() {
String filename = OpenCVTestRunner.getTempFileName("yml");
extractor.write(filename);
String truth = "%YAML:1.0\n---\nname: \"Feature2D.BRISK\"\nthreshold: 30\noctaves: 3\npatternScale: 1.\n";
String actual = readFile(filename);
actual = actual.replaceAll("e([+-])0(\\d\\d)", "e$1$2"); // NOTE: workaround for different platforms double representation
assertEquals(truth, actual);
}
}

View File

@ -1,66 +0,0 @@
package org.opencv.test.features2d;
import org.opencv.test.OpenCVTestCase;
import org.opencv.test.OpenCVTestRunner;
import org.opencv.features2d.KAZE;
public class KAZEDescriptorExtractorTest extends OpenCVTestCase {
KAZE extractor;
@Override
protected void setUp() throws Exception {
super.setUp();
extractor = KAZE.create(); // default (false,false,0.001f,4,4,1)
}
public void testCreate() {
assertNotNull(extractor);
}
public void testDetectListOfMatListOfListOfKeyPoint() {
fail("Not yet implemented");
}
public void testDetectListOfMatListOfListOfKeyPointListOfMat() {
fail("Not yet implemented");
}
public void testDetectMatListOfKeyPoint() {
fail("Not yet implemented");
}
public void testDetectMatListOfKeyPointMat() {
fail("Not yet implemented");
}
public void testEmpty() {
fail("Not yet implemented");
}
public void testReadYml() {
String filename = OpenCVTestRunner.getTempFileName("yml");
writeFile(filename, "%YAML:1.0\n---\nformat: 3\nname: \"Feature2D.KAZE\"\nextended: 1\nupright: 1\nthreshold: 0.125\noctaves: 3\nsublevels: 5\ndiffusivity: 2\n");
extractor.read(filename);
assertEquals(true, extractor.getExtended());
assertEquals(true, extractor.getUpright());
assertEquals(0.125, extractor.getThreshold());
assertEquals(3, extractor.getNOctaves());
assertEquals(5, extractor.getNOctaveLayers());
assertEquals(2, extractor.getDiffusivity());
}
public void testWriteYml() {
String filename = OpenCVTestRunner.getTempFileName("yml");
extractor.write(filename);
String truth = "%YAML:1.0\n---\nformat: 3\nname: \"Feature2D.KAZE\"\nextended: 0\nupright: 0\nthreshold: 0.0010000000474974513\noctaves: 4\nsublevels: 4\ndiffusivity: 1\n";
String actual = readFile(filename);
actual = actual.replaceAll("e([+-])0(\\d\\d)", "e$1$2"); // NOTE: workaround for different platforms double representation
assertEquals(truth, actual);
}
}

View File

@ -2,16 +2,12 @@
"whitelist":
{
"Feature2D": ["detect", "compute", "detectAndCompute", "descriptorSize", "descriptorType", "defaultNorm", "empty", "getDefaultName"],
"BRISK": ["create", "getDefaultName"],
"ORB": ["create", "setMaxFeatures", "setScaleFactor", "setNLevels", "setEdgeThreshold", "setFastThreshold", "setFirstLevel", "setWTA_K", "setScoreType", "setPatchSize", "getFastThreshold", "getDefaultName"],
"MSER": ["create", "detectRegions", "setDelta", "getDelta", "setMinArea", "getMinArea", "setMaxArea", "getMaxArea", "setPass2Only", "getPass2Only", "getDefaultName"],
"FastFeatureDetector": ["create", "setThreshold", "getThreshold", "setNonmaxSuppression", "getNonmaxSuppression", "setType", "getType", "getDefaultName"],
"AgastFeatureDetector": ["create", "setThreshold", "getThreshold", "setNonmaxSuppression", "getNonmaxSuppression", "setType", "getType", "getDefaultName"],
"GFTTDetector": ["create", "setMaxFeatures", "getMaxFeatures", "setQualityLevel", "getQualityLevel", "setMinDistance", "getMinDistance", "setBlockSize", "getBlockSize", "setHarrisDetector", "getHarrisDetector", "setK", "getK", "getDefaultName"],
"SimpleBlobDetector": ["create", "setParams", "getParams", "getDefaultName"],
"SimpleBlobDetector_Params": [],
"KAZE": ["create", "setExtended", "getExtended", "setUpright", "getUpright", "setThreshold", "getThreshold", "setNOctaves", "getNOctaves", "setNOctaveLayers", "getNOctaveLayers", "setDiffusivity", "getDiffusivity", "getDefaultName"],
"AKAZE": ["create", "setDescriptorType", "getDescriptorType", "setDescriptorSize", "getDescriptorSize", "setDescriptorChannels", "getDescriptorChannels", "setThreshold", "getThreshold", "setNOctaves", "getNOctaves", "setNOctaveLayers", "getNOctaveLayers", "setDiffusivity", "getDiffusivity", "getDefaultName"],
"DescriptorMatcher": ["add", "clear", "empty", "isMaskSupported", "train", "match", "knnMatch", "radiusMatch", "clone", "create"],
"BFMatcher": ["isMaskSupported", "create"],
"": ["drawKeypoints", "drawMatches", "drawMatchesKnn"]

@@ -6,8 +6,7 @@
}
},
"enum_fix" : {
"FastFeatureDetector" : { "DetectorType": "FastDetectorType" },
"AgastFeatureDetector" : { "DetectorType": "AgastDetectorType" }
"FastFeatureDetector" : { "DetectorType": "FastDetectorType" }
},
"func_arg_fix" : {
"Feature2D": {

@@ -1,9 +1,6 @@
#ifdef HAVE_OPENCV_FEATURES2D
typedef SimpleBlobDetector::Params SimpleBlobDetector_Params;
typedef AKAZE::DescriptorType AKAZE_DescriptorType;
typedef AgastFeatureDetector::DetectorType AgastFeatureDetector_DetectorType;
typedef FastFeatureDetector::DetectorType FastFeatureDetector_DetectorType;
typedef DescriptorMatcher::MatcherType DescriptorMatcher_MatcherType;
typedef KAZE::DiffusivityType KAZE_DiffusivityType;
typedef ORB::ScoreType ORB_ScoreType;
#endif

@@ -92,7 +92,7 @@ TrackedTarget = namedtuple('TrackedTarget', 'target, p0, p1, H, quad')
class PlaneTracker:
def __init__(self):
self.detector = cv.AKAZE_create(threshold = 0.003)
self.detector = cv.ORB_create( nfeatures = 1000 )
self.matcher = cv.FlannBasedMatcher(flann_params, {}) # bug : need to pass empty dict (#1329)
self.targets = []
self.frame_points = []

@@ -29,7 +29,7 @@ OCL_PERF_TEST_P(feature2d, detect, testing::Combine(Feature2DType::all(), TEST_I
OCL_PERF_TEST_P(feature2d, extract, testing::Combine(testing::Values(DETECTORS_EXTRACTORS), TEST_IMAGES))
{
Ptr<Feature2D> detector = AKAZE::create();
Ptr<Feature2D> detector = ORB::create();
Ptr<Feature2D> extractor = getFeature2D(get<0>(GetParam()));
std::string filename = getDataPath(get<1>(GetParam()));
Mat mimg = imread(filename, IMREAD_GRAYSCALE);

@@ -25,7 +25,7 @@ PERF_TEST_P(feature2d, detect, testing::Combine(Feature2DType::all(), TEST_IMAGE
PERF_TEST_P(feature2d, extract, testing::Combine(testing::Values(DETECTORS_EXTRACTORS), TEST_IMAGES))
{
Ptr<Feature2D> detector = AKAZE::create();
Ptr<Feature2D> detector = ORB::create();
Ptr<Feature2D> extractor = getFeature2D(get<0>(GetParam()));
std::string filename = getDataPath(get<1>(GetParam()));
Mat img = imread(filename, IMREAD_GRAYSCALE);

@@ -13,15 +13,10 @@ namespace opencv_test
FAST_DEFAULT, FAST_20_TRUE_TYPE5_8, FAST_20_TRUE_TYPE7_12, FAST_20_TRUE_TYPE9_16, \
FAST_20_FALSE_TYPE5_8, FAST_20_FALSE_TYPE7_12, FAST_20_FALSE_TYPE9_16, \
\
AGAST_DEFAULT, AGAST_5_8, AGAST_7_12d, AGAST_7_12s, AGAST_OAST_9_16, \
\
MSER_DEFAULT
#define DETECTORS_EXTRACTORS \
ORB_DEFAULT, ORB_1500_13_1, \
AKAZE_DEFAULT, AKAZE_DESCRIPTOR_KAZE, \
BRISK_DEFAULT, \
KAZE_DEFAULT, \
SIFT_DEFAULT
#define CV_ENUM_EXPAND(name, ...) CV_ENUM(name, __VA_ARGS__)
@@ -58,24 +53,6 @@ static inline Ptr<Feature2D> getFeature2D(Feature2DType type)
return FastFeatureDetector::create(20, false, FastFeatureDetector::TYPE_7_12);
case FAST_20_FALSE_TYPE9_16:
return FastFeatureDetector::create(20, false, FastFeatureDetector::TYPE_9_16);
case AGAST_DEFAULT:
return AgastFeatureDetector::create();
case AGAST_5_8:
return AgastFeatureDetector::create(70, true, AgastFeatureDetector::AGAST_5_8);
case AGAST_7_12d:
return AgastFeatureDetector::create(70, true, AgastFeatureDetector::AGAST_7_12d);
case AGAST_7_12s:
return AgastFeatureDetector::create(70, true, AgastFeatureDetector::AGAST_7_12s);
case AGAST_OAST_9_16:
return AgastFeatureDetector::create(70, true, AgastFeatureDetector::OAST_9_16);
case AKAZE_DEFAULT:
return AKAZE::create();
case AKAZE_DESCRIPTOR_KAZE:
return AKAZE::create(AKAZE::DESCRIPTOR_KAZE);
case BRISK_DEFAULT:
return BRISK::create();
case KAZE_DEFAULT:
return KAZE::create();
case MSER_DEFAULT:
return MSER::create();
case SIFT_DEFAULT:

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1,69 +0,0 @@
/* This is AGAST and OAST, an optimal and accelerated corner detector
based on the accelerated segment tests.
Below are the original copyright and the references. */
/*
Copyright (C) 2010 Elmar Mair
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
*Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
*Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
*Neither the name of the University of Cambridge nor the names of
its contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
The references are:
* Adaptive and Generic Corner Detection Based on the Accelerated Segment Test,
Elmar Mair and Gregory D. Hager and Darius Burschka
and Michael Suppa and Gerhard Hirzinger ECCV 2010
URL: http://www6.in.tum.de/Main/ResearchAgast
*/
#ifndef __OPENCV_FEATURES_2D_AGAST_HPP__
#define __OPENCV_FEATURES_2D_AGAST_HPP__
#ifdef __cplusplus
#include "precomp.hpp"
namespace cv
{
#if !(defined __i386__ || defined(_M_IX86) || defined __x86_64__ || defined(_M_X64))
int agast_tree_search(const uint32_t table_struct32[], int pixel_[], const unsigned char* const ptr, int threshold);
int AGAST_ALL_SCORE(const uchar* ptr, const int pixel[], int threshold, AgastFeatureDetector::DetectorType agasttype);
#endif //!(defined __i386__ || defined(_M_IX86) || defined __x86_64__ || defined(_M_X64))
void makeAgastOffsets(int pixel[16], int row_stride, AgastFeatureDetector::DetectorType type);
template<AgastFeatureDetector::DetectorType type>
int agast_cornerScore(const uchar* ptr, const int pixel[], int threshold);
}
#endif
#endif

@@ -1,276 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2008, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
/*
OpenCV wrapper of reference implementation of
[1] Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces.
Pablo F. Alcantarilla, J. Nuevo and Adrien Bartoli.
In British Machine Vision Conference (BMVC), Bristol, UK, September 2013
http://www.robesafe.com/personal/pablo.alcantarilla/papers/Alcantarilla13bmvc.pdf
@author Eugene Khvedchenya <ekhvedchenya@gmail.com>
*/
#include "precomp.hpp"
#include "kaze/AKAZEFeatures.h"
#include <iostream>
namespace cv
{
using namespace std;
class AKAZE_Impl : public AKAZE
{
public:
AKAZE_Impl(DescriptorType _descriptor_type, int _descriptor_size, int _descriptor_channels,
float _threshold, int _octaves, int _sublevels, KAZE::DiffusivityType _diffusivity, int _max_points)
: descriptor(_descriptor_type)
, descriptor_channels(_descriptor_channels)
, descriptor_size(_descriptor_size)
, threshold(_threshold)
, octaves(_octaves)
, sublevels(_sublevels)
, diffusivity(_diffusivity)
, max_points(_max_points)
{
}
virtual ~AKAZE_Impl() CV_OVERRIDE
{
}
void setDescriptorType(DescriptorType dtype) CV_OVERRIDE{ descriptor = dtype; }
DescriptorType getDescriptorType() const CV_OVERRIDE{ return descriptor; }
void setDescriptorSize(int dsize) CV_OVERRIDE { descriptor_size = dsize; }
int getDescriptorSize() const CV_OVERRIDE { return descriptor_size; }
void setDescriptorChannels(int dch) CV_OVERRIDE { descriptor_channels = dch; }
int getDescriptorChannels() const CV_OVERRIDE { return descriptor_channels; }
void setThreshold(double threshold_) CV_OVERRIDE { threshold = (float)threshold_; }
double getThreshold() const CV_OVERRIDE { return threshold; }
void setNOctaves(int octaves_) CV_OVERRIDE { octaves = octaves_; }
int getNOctaves() const CV_OVERRIDE { return octaves; }
void setNOctaveLayers(int octaveLayers_) CV_OVERRIDE { sublevels = octaveLayers_; }
int getNOctaveLayers() const CV_OVERRIDE { return sublevels; }
void setDiffusivity(KAZE::DiffusivityType diff_) CV_OVERRIDE{ diffusivity = diff_; }
KAZE::DiffusivityType getDiffusivity() const CV_OVERRIDE{ return diffusivity; }
void setMaxPoints(int max_points_) CV_OVERRIDE { max_points = max_points_; }
int getMaxPoints() const CV_OVERRIDE { return max_points; }
// returns the descriptor size in bytes
int descriptorSize() const CV_OVERRIDE
{
switch (descriptor)
{
case DESCRIPTOR_KAZE:
case DESCRIPTOR_KAZE_UPRIGHT:
return 64;
case DESCRIPTOR_MLDB:
case DESCRIPTOR_MLDB_UPRIGHT:
// We use the full length binary descriptor -> 486 bits
if (descriptor_size == 0)
{
int t = (6 + 36 + 120) * descriptor_channels;
return divUp(t, 8);
}
else
{
// We use the random bit selection length binary descriptor
return divUp(descriptor_size, 8);
}
default:
return -1;
}
}
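// Worked example (assumed defaults): DESCRIPTOR_MLDB with descriptor_size == 0
// and descriptor_channels == 3 gives t = (6 + 36 + 120) * 3 = 486 bits,
// so descriptorSize() returns divUp(486, 8) = 61 bytes per keypoint.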
// returns the descriptor type
int descriptorType() const CV_OVERRIDE
{
switch (descriptor)
{
case DESCRIPTOR_KAZE:
case DESCRIPTOR_KAZE_UPRIGHT:
return CV_32F;
case DESCRIPTOR_MLDB:
case DESCRIPTOR_MLDB_UPRIGHT:
return CV_8U;
default:
return -1;
}
}
// returns the default norm type
int defaultNorm() const CV_OVERRIDE
{
switch (descriptor)
{
case DESCRIPTOR_KAZE:
case DESCRIPTOR_KAZE_UPRIGHT:
return NORM_L2;
case DESCRIPTOR_MLDB:
case DESCRIPTOR_MLDB_UPRIGHT:
return NORM_HAMMING;
default:
return -1;
}
}
void detectAndCompute(InputArray image, InputArray mask,
std::vector<KeyPoint>& keypoints,
OutputArray descriptors,
bool useProvidedKeypoints) CV_OVERRIDE
{
CV_INSTRUMENT_REGION();
CV_Assert( ! image.empty() );
AKAZEOptions options;
options.descriptor = descriptor;
options.descriptor_channels = descriptor_channels;
options.descriptor_size = descriptor_size;
options.img_width = image.cols();
options.img_height = image.rows();
options.dthreshold = threshold;
options.omax = octaves;
options.nsublevels = sublevels;
options.diffusivity = diffusivity;
AKAZEFeatures impl(options);
impl.Create_Nonlinear_Scale_Space(image);
if (!useProvidedKeypoints)
{
impl.Feature_Detection(keypoints);
}
if (!mask.empty())
{
KeyPointsFilter::runByPixelsMask(keypoints, mask.getMat());
}
if (max_points > 0 && (int)keypoints.size() > max_points) {
std::partial_sort(keypoints.begin(), keypoints.begin() + max_points, keypoints.end(),
[](const cv::KeyPoint& k1, const cv::KeyPoint& k2) {return k1.response > k2.response;});
keypoints.erase(keypoints.begin() + max_points, keypoints.end());
}
if(descriptors.needed())
{
impl.Compute_Descriptors(keypoints, descriptors);
CV_Assert((descriptors.empty() || descriptors.cols() == descriptorSize()));
CV_Assert((descriptors.empty() || (descriptors.type() == descriptorType())));
}
}
void write(FileStorage& fs) const CV_OVERRIDE
{
writeFormat(fs);
fs << "name" << getDefaultName();
fs << "descriptor" << descriptor;
fs << "descriptor_channels" << descriptor_channels;
fs << "descriptor_size" << descriptor_size;
fs << "threshold" << threshold;
fs << "octaves" << octaves;
fs << "sublevels" << sublevels;
fs << "diffusivity" << diffusivity;
fs << "max_points" << max_points;
}
void read(const FileNode& fn) CV_OVERRIDE
{
// if node is empty, keep previous value
if (!fn["descriptor"].empty())
descriptor = static_cast<DescriptorType>((int)fn["descriptor"]);
if (!fn["descriptor_channels"].empty())
descriptor_channels = (int)fn["descriptor_channels"];
if (!fn["descriptor_size"].empty())
descriptor_size = (int)fn["descriptor_size"];
if (!fn["threshold"].empty())
threshold = (float)fn["threshold"];
if (!fn["octaves"].empty())
octaves = (int)fn["octaves"];
if (!fn["sublevels"].empty())
sublevels = (int)fn["sublevels"];
if (!fn["diffusivity"].empty())
diffusivity = static_cast<KAZE::DiffusivityType>((int)fn["diffusivity"]);
if (!fn["max_points"].empty())
max_points = (int)fn["max_points"];
}
DescriptorType descriptor;
int descriptor_channels;
int descriptor_size;
float threshold;
int octaves;
int sublevels;
KAZE::DiffusivityType diffusivity;
int max_points;
};
Ptr<AKAZE> AKAZE::create(DescriptorType descriptor_type,
int descriptor_size, int descriptor_channels,
float threshold, int octaves,
int sublevels, KAZE::DiffusivityType diffusivity, int max_points)
{
return makePtr<AKAZE_Impl>(descriptor_type, descriptor_size, descriptor_channels,
threshold, octaves, sublevels, diffusivity, max_points);
}
String AKAZE::getDefaultName() const
{
return (Feature2D::getDefaultName() + ".AKAZE");
}
}
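For orientation, a minimal usage sketch of the interface implemented above (a hedged example, not part of this diff; the image path is hypothetical and the settings are the pre-move cv::AKAZE defaults):

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>

int main()
{
    // Load a test image (hypothetical path) and run AKAZE end to end.
    cv::Mat img = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);
    cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create(); // DESCRIPTOR_MLDB defaults
    std::vector<cv::KeyPoint> kpts;
    cv::Mat desc; // one 61-byte binary row per keypoint with default settings
    akaze->detectAndCompute(img, cv::noArray(), kpts, desc);
    // MLDB descriptors are binary, so they are matched with Hamming distance.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    return 0;
}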

@@ -1,216 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// Intel License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
namespace cv
{
BOWTrainer::BOWTrainer() : size(0)
{}
BOWTrainer::~BOWTrainer()
{}
void BOWTrainer::add( const Mat& _descriptors )
{
CV_Assert( !_descriptors.empty() );
if( !descriptors.empty() )
{
CV_Assert( descriptors[0].cols == _descriptors.cols );
CV_Assert( descriptors[0].type() == _descriptors.type() );
size += _descriptors.rows;
}
else
{
size = _descriptors.rows;
}
descriptors.push_back(_descriptors);
}
const std::vector<Mat>& BOWTrainer::getDescriptors() const
{
return descriptors;
}
int BOWTrainer::descriptorsCount() const
{
return descriptors.empty() ? 0 : size;
}
void BOWTrainer::clear()
{
descriptors.clear();
}
BOWKMeansTrainer::BOWKMeansTrainer( int _clusterCount, const TermCriteria& _termcrit,
int _attempts, int _flags ) :
clusterCount(_clusterCount), termcrit(_termcrit), attempts(_attempts), flags(_flags)
{}
Mat BOWKMeansTrainer::cluster() const
{
CV_INSTRUMENT_REGION();
CV_Assert( !descriptors.empty() );
Mat mergedDescriptors( descriptorsCount(), descriptors[0].cols, descriptors[0].type() );
for( size_t i = 0, start = 0; i < descriptors.size(); i++ )
{
Mat submut = mergedDescriptors.rowRange((int)start, (int)(start + descriptors[i].rows));
descriptors[i].copyTo(submut);
start += descriptors[i].rows;
}
return cluster( mergedDescriptors );
}
BOWKMeansTrainer::~BOWKMeansTrainer()
{}
Mat BOWKMeansTrainer::cluster( const Mat& _descriptors ) const
{
CV_INSTRUMENT_REGION();
Mat labels, vocabulary;
kmeans( _descriptors, clusterCount, labels, termcrit, attempts, flags, vocabulary );
return vocabulary;
}
BOWImgDescriptorExtractor::BOWImgDescriptorExtractor( const Ptr<DescriptorExtractor>& _dextractor,
const Ptr<DescriptorMatcher>& _dmatcher ) :
dextractor(_dextractor), dmatcher(_dmatcher)
{}
BOWImgDescriptorExtractor::BOWImgDescriptorExtractor( const Ptr<DescriptorMatcher>& _dmatcher ) :
dmatcher(_dmatcher)
{}
BOWImgDescriptorExtractor::~BOWImgDescriptorExtractor()
{}
void BOWImgDescriptorExtractor::setVocabulary( const Mat& _vocabulary )
{
dmatcher->clear();
vocabulary = _vocabulary;
dmatcher->add( std::vector<Mat>(1, vocabulary) );
}
const Mat& BOWImgDescriptorExtractor::getVocabulary() const
{
return vocabulary;
}
void BOWImgDescriptorExtractor::compute( InputArray image, std::vector<KeyPoint>& keypoints, OutputArray imgDescriptor,
std::vector<std::vector<int> >* pointIdxsOfClusters, Mat* descriptors )
{
CV_INSTRUMENT_REGION();
imgDescriptor.release();
if( keypoints.empty() )
return;
// Compute descriptors for the image.
Mat _descriptors;
dextractor->compute( image, keypoints, _descriptors );
compute( _descriptors, imgDescriptor, pointIdxsOfClusters );
// Add the descriptors of image keypoints
if (descriptors) {
*descriptors = _descriptors.clone();
}
}
int BOWImgDescriptorExtractor::descriptorSize() const
{
return vocabulary.empty() ? 0 : vocabulary.rows;
}
int BOWImgDescriptorExtractor::descriptorType() const
{
return CV_32FC1;
}
void BOWImgDescriptorExtractor::compute( InputArray keypointDescriptors, OutputArray _imgDescriptor, std::vector<std::vector<int> >* pointIdxsOfClusters )
{
CV_INSTRUMENT_REGION();
CV_Assert( !vocabulary.empty() );
CV_Assert(!keypointDescriptors.empty());
int clusterCount = descriptorSize(); // = vocabulary.rows
// Match keypoint descriptors to cluster center (to vocabulary)
std::vector<DMatch> matches;
dmatcher->match( keypointDescriptors, matches );
// Compute image descriptor
if( pointIdxsOfClusters )
{
pointIdxsOfClusters->clear();
pointIdxsOfClusters->resize(clusterCount);
}
_imgDescriptor.create(1, clusterCount, descriptorType());
_imgDescriptor.setTo(Scalar::all(0));
Mat imgDescriptor = _imgDescriptor.getMat();
float *dptr = imgDescriptor.ptr<float>();
for( size_t i = 0; i < matches.size(); i++ )
{
int queryIdx = matches[i].queryIdx;
int trainIdx = matches[i].trainIdx; // cluster index
CV_Assert( queryIdx == (int)i );
dptr[trainIdx] = dptr[trainIdx] + 1.f;
if( pointIdxsOfClusters )
(*pointIdxsOfClusters)[trainIdx].push_back( queryIdx );
}
// Normalize image descriptor.
imgDescriptor /= keypointDescriptors.size().height;
}
}
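A minimal sketch of the bag-of-words pipeline these classes implement (a hedged example; trainDescriptors and testImage are hypothetical, caller-supplied inputs, and the 100-word vocabulary size is an assumption):

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

// trainDescriptors: float descriptor matrix gathered from a training set;
// testImage: a grayscale image to describe against the learned vocabulary.
cv::Mat computeBowDescriptor(const cv::Mat& trainDescriptors, const cv::Mat& testImage)
{
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("BruteForce");
    cv::BOWKMeansTrainer trainer(100);               // assumed 100-word vocabulary
    trainer.add(trainDescriptors);
    cv::Mat vocabulary = trainer.cluster();          // k-means over all added descriptors
    cv::BOWImgDescriptorExtractor bow(sift, matcher);
    bow.setVocabulary(vocabulary);
    std::vector<cv::KeyPoint> kpts;
    sift->detect(testImage, kpts);
    cv::Mat imgDescriptor;                           // 1 x 100 normalized cluster histogram
    bow.compute(testImage, kpts, imgDescriptor);
    return imgDescriptor;
}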

File diff suppressed because it is too large

@@ -1,213 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2008, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
/*
OpenCV wrapper of reference implementation of
[1] KAZE Features. Pablo F. Alcantarilla, Adrien Bartoli and Andrew J. Davison.
In European Conference on Computer Vision (ECCV), Firenze, Italy, October 2012
http://www.robesafe.com/personal/pablo.alcantarilla/papers/Alcantarilla12eccv.pdf
@author Eugene Khvedchenya <ekhvedchenya@gmail.com>
*/
#include "precomp.hpp"
#include "kaze/KAZEFeatures.h"
namespace cv
{
class KAZE_Impl CV_FINAL : public KAZE
{
public:
KAZE_Impl(bool _extended, bool _upright, float _threshold, int _octaves,
int _sublevels, KAZE::DiffusivityType _diffusivity)
: extended(_extended)
, upright(_upright)
, threshold(_threshold)
, octaves(_octaves)
, sublevels(_sublevels)
, diffusivity(_diffusivity)
{
}
virtual ~KAZE_Impl() CV_OVERRIDE {}
void setExtended(bool extended_) CV_OVERRIDE { extended = extended_; }
bool getExtended() const CV_OVERRIDE { return extended; }
void setUpright(bool upright_) CV_OVERRIDE { upright = upright_; }
bool getUpright() const CV_OVERRIDE { return upright; }
void setThreshold(double threshold_) CV_OVERRIDE { threshold = (float)threshold_; }
double getThreshold() const CV_OVERRIDE { return threshold; }
void setNOctaves(int octaves_) CV_OVERRIDE { octaves = octaves_; }
int getNOctaves() const CV_OVERRIDE { return octaves; }
void setNOctaveLayers(int octaveLayers_) CV_OVERRIDE { sublevels = octaveLayers_; }
int getNOctaveLayers() const CV_OVERRIDE { return sublevels; }
void setDiffusivity(KAZE::DiffusivityType diff_) CV_OVERRIDE{ diffusivity = diff_; }
KAZE::DiffusivityType getDiffusivity() const CV_OVERRIDE{ return diffusivity; }
// returns the descriptor size in bytes
int descriptorSize() const CV_OVERRIDE
{
return extended ? 128 : 64;
}
// returns the descriptor type
int descriptorType() const CV_OVERRIDE
{
return CV_32F;
}
// returns the default norm type
int defaultNorm() const CV_OVERRIDE
{
return NORM_L2;
}
void detectAndCompute(InputArray image, InputArray mask,
std::vector<KeyPoint>& keypoints,
OutputArray descriptors,
bool useProvidedKeypoints) CV_OVERRIDE
{
CV_INSTRUMENT_REGION();
cv::Mat img = image.getMat();
if (img.channels() > 1)
cvtColor(image, img, COLOR_BGR2GRAY);
Mat img1_32;
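// The conversions below bring the image to CV_32F with intensities in [0, 1],
// so the nonlinear scale space is independent of the input bit depth.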
if ( img.depth() == CV_32F )
img1_32 = img;
else if ( img.depth() == CV_8U )
img.convertTo(img1_32, CV_32F, 1.0 / 255.0, 0);
else if ( img.depth() == CV_16U )
img.convertTo(img1_32, CV_32F, 1.0 / 65535.0, 0);
CV_Assert( ! img1_32.empty() );
KAZEOptions options;
options.img_width = img.cols;
options.img_height = img.rows;
options.extended = extended;
options.upright = upright;
options.dthreshold = threshold;
options.omax = octaves;
options.nsublevels = sublevels;
options.diffusivity = diffusivity;
KAZEFeatures impl(options);
impl.Create_Nonlinear_Scale_Space(img1_32);
if (!useProvidedKeypoints)
{
impl.Feature_Detection(keypoints);
}
if (!mask.empty())
{
cv::KeyPointsFilter::runByPixelsMask(keypoints, mask.getMat());
}
if( descriptors.needed() )
{
Mat desc;
impl.Feature_Description(keypoints, desc);
desc.copyTo(descriptors);
CV_Assert((!desc.rows || desc.cols == descriptorSize()));
CV_Assert((!desc.rows || (desc.type() == descriptorType())));
}
}
void write(FileStorage& fs) const CV_OVERRIDE
{
writeFormat(fs);
fs << "name" << getDefaultName();
fs << "extended" << (int)extended;
fs << "upright" << (int)upright;
fs << "threshold" << threshold;
fs << "octaves" << octaves;
fs << "sublevels" << sublevels;
fs << "diffusivity" << diffusivity;
}
void read(const FileNode& fn) CV_OVERRIDE
{
// if node is empty, keep previous value
if (!fn["extended"].empty())
extended = (int)fn["extended"] != 0;
if (!fn["upright"].empty())
upright = (int)fn["upright"] != 0;
if (!fn["threshold"].empty())
threshold = (float)fn["threshold"];
if (!fn["octaves"].empty())
octaves = (int)fn["octaves"];
if (!fn["sublevels"].empty())
sublevels = (int)fn["sublevels"];
if (!fn["diffusivity"].empty())
diffusivity = static_cast<KAZE::DiffusivityType>((int)fn["diffusivity"]);
}
bool extended;
bool upright;
float threshold;
int octaves;
int sublevels;
KAZE::DiffusivityType diffusivity;
};
Ptr<KAZE> KAZE::create(bool extended, bool upright,
float threshold,
int octaves, int sublevels,
KAZE::DiffusivityType diffusivity)
{
return makePtr<KAZE_Impl>(extended, upright, threshold, octaves, sublevels, diffusivity);
}
String KAZE::getDefaultName() const
{
return (Feature2D::getDefaultName() + ".KAZE");
}
}

@@ -1,65 +0,0 @@
/**
* @file AKAZEConfig.h
* @brief AKAZE configuration file
* @date Feb 23, 2014
* @author Pablo F. Alcantarilla, Jesus Nuevo
*/
#ifndef __OPENCV_FEATURES_2D_AKAZE_CONFIG_H__
#define __OPENCV_FEATURES_2D_AKAZE_CONFIG_H__
namespace cv
{
/* ************************************************************************* */
/// AKAZE configuration options structure
struct AKAZEOptions {
AKAZEOptions()
: omax(4)
, nsublevels(4)
, img_width(0)
, img_height(0)
, soffset(1.6f)
, derivative_factor(1.5f)
, sderivatives(1.0)
, diffusivity(KAZE::DIFF_PM_G2)
, dthreshold(0.001f)
, min_dthreshold(0.00001f)
, descriptor(AKAZE::DESCRIPTOR_MLDB)
, descriptor_size(0)
, descriptor_channels(3)
, descriptor_pattern_size(10)
, kcontrast(0.001f)
, kcontrast_percentile(0.7f)
, kcontrast_nbins(300)
{
}
int omax; ///< Maximum octave evolution of the image 2^sigma (coarsest scale sigma units)
int nsublevels; ///< Default number of sublevels per scale level
int img_width; ///< Width of the input image
int img_height; ///< Height of the input image
float soffset; ///< Base scale offset (sigma units)
float derivative_factor; ///< Factor for the multiscale derivatives
float sderivatives; ///< Smoothing factor for the derivatives
KAZE::DiffusivityType diffusivity; ///< Diffusivity type
float dthreshold; ///< Detector response threshold to accept point
float min_dthreshold; ///< Minimum detector threshold to accept a point
AKAZE::DescriptorType descriptor; ///< Type of descriptor
int descriptor_size; ///< Size of the descriptor in bits. 0->Full size
int descriptor_channels; ///< Number of channels in the descriptor (1, 2, 3)
int descriptor_pattern_size; ///< Actual patch size is 2*pattern_size*point.scale
float kcontrast; ///< The contrast factor parameter
float kcontrast_percentile; ///< Percentile level for the contrast factor
int kcontrast_nbins; ///< Number of bins for the contrast factor histogram
};
}
#endif

File diff suppressed because it is too large

@@ -1,115 +0,0 @@
/**
* @file AKAZE.h
* @brief Main class for detecting and computing binary descriptors in an
* accelerated nonlinear scale space
* @date Mar 27, 2013
* @author Pablo F. Alcantarilla, Jesus Nuevo
*/
#ifndef __OPENCV_FEATURES_2D_AKAZE_FEATURES_H__
#define __OPENCV_FEATURES_2D_AKAZE_FEATURES_H__
/* ************************************************************************* */
// Includes
#include "AKAZEConfig.h"
namespace cv
{
/// A-KAZE nonlinear diffusion filtering evolution
template <typename MatType>
struct Evolution
{
Evolution() {
etime = 0.0f;
esigma = 0.0f;
octave = 0;
sublevel = 0;
sigma_size = 0;
octave_ratio = 0.0f;
border = 0;
}
template <typename T>
explicit Evolution(const Evolution<T> &other) {
size = other.size;
etime = other.etime;
esigma = other.esigma;
octave = other.octave;
sublevel = other.sublevel;
sigma_size = other.sigma_size;
octave_ratio = other.octave_ratio;
border = other.border;
other.Lx.copyTo(Lx);
other.Ly.copyTo(Ly);
other.Lt.copyTo(Lt);
other.Lsmooth.copyTo(Lsmooth);
other.Ldet.copyTo(Ldet);
}
MatType Lx, Ly; ///< First order spatial derivatives
MatType Lt; ///< Evolution image
MatType Lsmooth; ///< Smoothed image, used only for computing determinant, released afterwards
MatType Ldet; ///< Detector response
Size size; ///< Size of the layer
float etime; ///< Evolution time
float esigma; ///< Evolution sigma. For linear diffusion t = sigma^2 / 2
int octave; ///< Image octave
int sublevel; ///< Image sublevel in each octave
int sigma_size; ///< Integer esigma. For computing the feature detector responses
float octave_ratio; ///< Scaling ratio of this octave. ratio = 2^octave
int border; ///< Width of border where descriptors cannot be computed
};
typedef Evolution<Mat> MEvolution;
typedef Evolution<UMat> UEvolution;
typedef std::vector<MEvolution> Pyramid;
typedef std::vector<UEvolution> UMatPyramid;
/* ************************************************************************* */
// AKAZE Class Declaration
class AKAZEFeatures {
private:
AKAZEOptions options_; ///< Configuration options for AKAZE
Pyramid evolution_; ///< Vector of nonlinear diffusion evolution
/// FED parameters
int ncycles_; ///< Number of cycles
bool reordering_; ///< Flag for reordering time steps
std::vector<std::vector<float > > tsteps_; ///< Vector of FED dynamic time steps
std::vector<int> nsteps_; ///< Vector of number of steps per cycle
/// Matrices for the M-LDB descriptor computation
cv::Mat descriptorSamples_; // List of positions in the grids to sample LDB bits from.
cv::Mat descriptorBits_;
cv::Mat bitMask_;
/// Scale Space methods
void Allocate_Memory_Evolution();
void Find_Scale_Space_Extrema(std::vector<Mat>& keypoints_by_layers);
void Do_Subpixel_Refinement(std::vector<Mat>& keypoints_by_layers,
std::vector<KeyPoint>& kpts);
/// Feature description methods
void Compute_Keypoints_Orientation(std::vector<cv::KeyPoint>& kpts) const;
public:
/// Constructor with input arguments
AKAZEFeatures(const AKAZEOptions& options);
void Create_Nonlinear_Scale_Space(InputArray img);
void Feature_Detection(std::vector<cv::KeyPoint>& kpts);
void Compute_Descriptors(std::vector<cv::KeyPoint>& kpts, OutputArray desc);
};
/* ************************************************************************* */
/// Inline functions
void generateDescriptorSubsample(cv::Mat& sampleList, cv::Mat& comparisons,
int nbits, int pattern_size, int nchannels);
}
#endif

@@ -1,56 +0,0 @@
/**
* @file KAZEConfig.h
* @brief Configuration file
* @date Dec 27, 2011
* @author Pablo F. Alcantarilla
*/
#ifndef __OPENCV_FEATURES_2D_KAZE_CONFIG_H__
#define __OPENCV_FEATURES_2D_KAZE_CONFIG_H__
// OpenCV Includes
#include "../precomp.hpp"
#include <opencv2/features2d.hpp>
namespace cv
{
//*************************************************************************************
struct KAZEOptions {
KAZEOptions()
: diffusivity(KAZE::DIFF_PM_G2)
, soffset(1.60f)
, omax(4)
, nsublevels(4)
, img_width(0)
, img_height(0)
, sderivatives(1.0f)
, dthreshold(0.001f)
, kcontrast(0.01f)
, kcontrast_percentille(0.7f)
, kcontrast_bins(300)
, upright(false)
, extended(false)
{
}
KAZE::DiffusivityType diffusivity;
float soffset;
int omax;
int nsublevels;
int img_width;
int img_height;
float sderivatives;
float dthreshold;
float kcontrast;
float kcontrast_percentille;
int kcontrast_bins;
bool upright;
bool extended;
};
}
#endif

File diff suppressed because it is too large

@@ -1,64 +0,0 @@
/**
* @file KAZE.h
* @brief Main program for detecting and computing descriptors in a nonlinear
* scale space
* @date Jan 21, 2012
* @author Pablo F. Alcantarilla
*/
#ifndef __OPENCV_FEATURES_2D_KAZE_FEATURES_H__
#define __OPENCV_FEATURES_2D_KAZE_FEATURES_H__
/* ************************************************************************* */
// Includes
#include "KAZEConfig.h"
#include "nldiffusion_functions.h"
#include "fed.h"
#include "TEvolution.h"
namespace cv
{
/* ************************************************************************* */
// KAZE Class Declaration
class KAZEFeatures
{
private:
/// Parameters of the Nonlinear diffusion class
KAZEOptions options_; ///< Configuration options for KAZE
std::vector<TEvolution> evolution_; ///< Vector of nonlinear diffusion evolution
/// Vector of keypoint vectors for finding extrema in multiple threads
std::vector<std::vector<cv::KeyPoint> > kpts_par_;
/// FED parameters
int ncycles_; ///< Number of cycles
bool reordering_; ///< Flag for reordering time steps
std::vector<std::vector<float > > tsteps_; ///< Vector of FED dynamic time steps
std::vector<int> nsteps_; ///< Vector of number of steps per cycle
public:
/// Constructor
KAZEFeatures(KAZEOptions& options);
/// Public methods for KAZE interface
void Allocate_Memory_Evolution(void);
int Create_Nonlinear_Scale_Space(const cv::Mat& img);
void Feature_Detection(std::vector<cv::KeyPoint>& kpts);
void Feature_Description(std::vector<cv::KeyPoint>& kpts, cv::Mat& desc);
static void Compute_Main_Orientation(cv::KeyPoint& kpt, const std::vector<TEvolution>& evolution_, const KAZEOptions& options);
/// Feature Detection Methods
void Compute_KContrast(const cv::Mat& img, const float& kper);
void Compute_Multiscale_Derivatives(void);
void Compute_Detector_Response(void);
void Determinant_Hessian(std::vector<cv::KeyPoint>& kpts);
void Do_Subpixel_Refinement(std::vector<cv::KeyPoint>& kpts);
};
}
#endif

@@ -1,41 +0,0 @@
/**
* @file TEvolution.h
* @brief Header file with the declaration of the TEvolution struct
* @date Jun 02, 2014
* @author Pablo F. Alcantarilla
*/
#ifndef __OPENCV_FEATURES_2D_TEVOLUTION_H__
#define __OPENCV_FEATURES_2D_TEVOLUTION_H__
namespace cv
{
/* ************************************************************************* */
/// KAZE/A-KAZE nonlinear diffusion filtering evolution
struct TEvolution
{
TEvolution() {
etime = 0.0f;
esigma = 0.0f;
octave = 0;
sublevel = 0;
sigma_size = 0;
}
Mat Lx, Ly; ///< First order spatial derivatives
Mat Lxx, Lxy, Lyy; ///< Second order spatial derivatives
Mat Lt; ///< Evolution image
Mat Lsmooth; ///< Smoothed image
Mat Ldet; ///< Detector response
float etime; ///< Evolution time
float esigma; ///< Evolution sigma. For linear diffusion t = sigma^2 / 2
int octave; ///< Image octave
int sublevel; ///< Image sublevel in each octave
int sigma_size; ///< Integer esigma. For computing the feature detector responses
};
}
#endif

@@ -1,192 +0,0 @@
//=============================================================================
//
// fed.cpp
// Authors: Pablo F. Alcantarilla (1), Jesus Nuevo (2)
// Institutions: Georgia Institute of Technology (1)
// TrueVision Solutions (2)
// Date: 15/09/2013
// Email: pablofdezalc@gmail.com
//
// AKAZE Features Copyright 2013, Pablo F. Alcantarilla, Jesus Nuevo
// All Rights Reserved
// See LICENSE for the license information
//=============================================================================
/**
* @file fed.cpp
* @brief Functions for performing Fast Explicit Diffusion and building the
* nonlinear scale space
* @date Sep 15, 2013
* @author Pablo F. Alcantarilla, Jesus Nuevo
* @note This code is derived from FED/FJ library from Grewenig et al.,
* The FED/FJ library allows solving more advanced problems
* Please look at the following papers for more information about FED:
* [1] S. Grewenig, J. Weickert, C. Schroers, A. Bruhn. Cyclic Schemes for
* PDE-Based Image Analysis. Technical Report No. 327, Department of Mathematics,
* Saarland University, Saarbrücken, Germany, March 2013
* [2] S. Grewenig, J. Weickert, A. Bruhn. From box filtering to fast explicit diffusion.
* DAGM, 2010
*
*/
#include "../precomp.hpp"
#include "fed.h"
using namespace std;
//*************************************************************************************
//*************************************************************************************
/**
* @brief This function allocates an array of the least number of time steps such
* that a certain stopping time for the whole process can be obtained and fills
* it with the respective FED time step sizes for one cycle
* The function returns the number of time steps per cycle or 0 on failure
* @param T Desired process stopping time
* @param M Desired number of cycles
* @param tau_max Stability limit for the explicit scheme
* @param reordering Reordering flag
* @param tau The vector with the dynamic step sizes
*/
int fed_tau_by_process_time(const float& T, const int& M, const float& tau_max,
const bool& reordering, std::vector<float>& tau) {
// All cycles have the same fraction of the stopping time
return fed_tau_by_cycle_time(T/(float)M,tau_max,reordering,tau);
}
//*************************************************************************************
//*************************************************************************************
/**
* @brief This function allocates an array of the least number of time steps such
* that a certain stopping time for the whole process can be obtained and fills it
* with the respective FED time step sizes for one cycle
* The function returns the number of time steps per cycle or 0 on failure
* @param t Desired cycle stopping time
* @param tau_max Stability limit for the explicit scheme
* @param reordering Reordering flag
* @param tau The vector with the dynamic step sizes
*/
int fed_tau_by_cycle_time(const float& t, const float& tau_max,
const bool& reordering, std::vector<float> &tau) {
int n = 0; // Number of time steps
float scale = 0.0; // Ratio of t we search to maximal t
// Compute necessary number of time steps
n = cvCeil(sqrtf(3.0f*t/tau_max+0.25f)-0.5f-1.0e-8f);
scale = 3.0f*t/(tau_max*(float)(n*(n+1)));
// Call internal FED time step creation routine
return fed_tau_internal(n,scale,tau_max,reordering,tau);
}
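// Worked example (assumed numbers): t = 10 and tau_max = 0.25 give
// n = cvCeil(sqrt(3*10/0.25 + 0.25) - 0.5) = cvCeil(10.47) = 11 steps and
// scale = 3*10 / (0.25 * 11 * 12) ~= 0.909, because n maximal FED steps
// reach the cycle stopping time tau_max * n * (n + 1) / 3.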
//*************************************************************************************
//*************************************************************************************
/**
* @brief This function allocates an array of time steps and fills it with FED
* time step sizes
* The function returns the number of time steps per cycle or 0 on failure
* @param n Number of internal steps
* @param scale Ratio of t we search to maximal t
* @param tau_max Stability limit for the explicit scheme
* @param reordering Reordering flag
* @param tau The vector with the dynamic step sizes
*/
int fed_tau_internal(const int& n, const float& scale, const float& tau_max,
const bool& reordering, std::vector<float> &tau) {
float c = 0.0, d = 0.0; // Time savers
vector<float> tauh; // Helper vector for unsorted taus
if (n <= 0) {
return 0;
}
// Allocate memory for the time step size
tau = vector<float>(n);
if (reordering) {
tauh = vector<float>(n);
}
// Compute time saver
c = 1.0f / (4.0f * (float)n + 2.0f);
d = scale * tau_max / 2.0f;
// Set up originally ordered tau vector
for (int k = 0; k < n; ++k) {
float h = cosf((float)CV_PI * (2.0f * (float)k + 1.0f) * c);
if (reordering) {
tauh[k] = d / (h * h);
}
else {
tau[k] = d / (h * h);
}
}
// Permute list of time steps according to chosen reordering function
int kappa = 0, prime = 0;
if (reordering == true) {
// Choose kappa cycle with k = n/2
// This is a heuristic. We can use Leja ordering instead!!
kappa = n / 2;
// Get modulus for permutation
prime = n + 1;
while (!fed_is_prime_internal(prime)) {
prime++;
}
// Perform permutation
for (int k = 0, l = 0; l < n; ++k, ++l) {
int index = 0;
while ((index = ((k+1)*kappa) % prime - 1) >= n) {
k++;
}
tau[l] = tauh[index];
}
}
return n;
}
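// In closed form the generated step sizes are
//   tau_k = (scale * tau_max / 2) / cos^2(pi * (2k + 1) / (4n + 2)),  k = 0..n-1,
// whose sum is scale * tau_max * n * (n + 1) / 3, i.e. exactly the cycle
// stopping time requested via the rescaling computed in fed_tau_by_cycle_time.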
//*************************************************************************************
//*************************************************************************************
/**
* @brief This function checks if a number is prime or not
* @param number Number to check if it is prime or not
* @return true if the number is prime
*/
bool fed_is_prime_internal(const int& number) {
bool is_prime = false;
if (number <= 1) {
return false;
}
else if (number == 2 || number == 3 || number == 5 || number == 7) {
return true;
}
else if ((number % 2) == 0 || (number % 3) == 0 || (number % 5) == 0 || (number % 7) == 0) {
return false;
}
else {
is_prime = true;
int upperLimit = (int)sqrt(1.0f + number);
int divisor = 11;
while (divisor <= upperLimit ) {
if (number % divisor == 0)
{
is_prime = false;
}
divisor +=2;
}
return is_prime;
}
}

@@ -1,25 +0,0 @@
#ifndef __OPENCV_FEATURES_2D_FED_H__
#define __OPENCV_FEATURES_2D_FED_H__
//******************************************************************************
//******************************************************************************
// Includes
#include <vector>
//*************************************************************************************
//*************************************************************************************
// Declaration of functions
int fed_tau_by_process_time(const float& T, const int& M, const float& tau_max,
const bool& reordering, std::vector<float>& tau);
int fed_tau_by_cycle_time(const float& t, const float& tau_max,
const bool& reordering, std::vector<float> &tau) ;
int fed_tau_internal(const int& n, const float& scale, const float& tau_max,
const bool& reordering, std::vector<float> &tau);
bool fed_is_prime_internal(const int& number);
//*************************************************************************************
//*************************************************************************************
#endif // __OPENCV_FEATURES_2D_FED_H__

@@ -1,542 +0,0 @@
//=============================================================================
//
// nldiffusion_functions.cpp
// Author: Pablo F. Alcantarilla
// Institution: Université d'Auvergne
// Address: Clermont Ferrand, France
// Date: 27/12/2011
// Email: pablofdezalc@gmail.com
//
// KAZE Features Copyright 2012, Pablo F. Alcantarilla
// All Rights Reserved
// See LICENSE for the license information
//=============================================================================
/**
* @file nldiffusion_functions.cpp
* @brief Functions for non-linear diffusion applications:
* 2D Gaussian Derivatives
* Perona and Malik conductivity equations
* Perona and Malik evolution
* @date Dec 27, 2011
* @author Pablo F. Alcantarilla
*/
#include "../precomp.hpp"
#include "nldiffusion_functions.h"
#include <iostream>
// Namespaces
/* ************************************************************************* */
namespace cv
{
using namespace std;
/* ************************************************************************* */
/**
* @brief This function smoothes an image with a Gaussian kernel
* @param src Input image
* @param dst Output image
* @param ksize_x Kernel size in X-direction (horizontal)
* @param ksize_y Kernel size in Y-direction (vertical)
* @param sigma Kernel standard deviation
*/
void gaussian_2D_convolution(const cv::Mat& src, cv::Mat& dst, int ksize_x, int ksize_y, float sigma) {
int ksize_x_ = 0, ksize_y_ = 0;
// Compute an appropriate kernel size according to the specified sigma
if (sigma > ksize_x || sigma > ksize_y || ksize_x == 0 || ksize_y == 0) {
ksize_x_ = cvCeil(2.0f*(1.0f + (sigma - 0.8f) / (0.3f)));
ksize_y_ = ksize_x_;
}
// The kernel size must be an odd number
if ((ksize_x_ % 2) == 0) {
ksize_x_ += 1;
}
if ((ksize_y_ % 2) == 0) {
ksize_y_ += 1;
}
// Perform the Gaussian Smoothing with border replication
GaussianBlur(src, dst, Size(ksize_x_, ksize_y_), sigma, sigma, BORDER_REPLICATE);
}
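// Worked example (assumed numbers): sigma = 1.6 with ksize_x = ksize_y = 0
// yields ksize_x_ = cvCeil(2 * (1 + (1.6 - 0.8) / 0.3)) = cvCeil(7.33) = 8,
// which the odd-size correction turns into a 9x9 Gaussian kernel.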
/* ************************************************************************* */
/**
* @brief This function computes image derivatives with Scharr kernel
* @param src Input image
* @param dst Output image
* @param xorder Derivative order in X-direction (horizontal)
* @param yorder Derivative order in Y-direction (vertical)
* @note The Scharr operator approximates rotation invariance better than
* other stencils such as Sobel. See Weickert and Scharr,
* A Scheme for Coherence-Enhancing Diffusion Filtering with Optimized Rotation Invariance,
* Journal of Visual Communication and Image Representation 2002
*/
void image_derivatives_scharr(const cv::Mat& src, cv::Mat& dst, int xorder, int yorder) {
Scharr(src, dst, CV_32F, xorder, yorder, 1.0, 0, BORDER_DEFAULT);
}
/* ************************************************************************* */
/**
* @brief This function computes the Perona and Malik conductivity coefficient g1
* g1 = exp(-|dL|^2/k^2)
* @param _Lx First order image derivative in X-direction (horizontal)
* @param _Ly First order image derivative in Y-direction (vertical)
* @param _dst Output image
* @param k Contrast factor parameter
*/
void pm_g1(InputArray _Lx, InputArray _Ly, OutputArray _dst, float k) {
_dst.create(_Lx.size(), _Lx.type());
Mat Lx = _Lx.getMat();
Mat Ly = _Ly.getMat();
Mat dst = _dst.getMat();
Size sz = Lx.size();
float inv_k = 1.0f / (k*k);
for (int y = 0; y < sz.height; y++) {
const float* Lx_row = Lx.ptr<float>(y);
const float* Ly_row = Ly.ptr<float>(y);
float* dst_row = dst.ptr<float>(y);
for (int x = 0; x < sz.width; x++) {
dst_row[x] = (-inv_k*(Lx_row[x]*Lx_row[x] + Ly_row[x]*Ly_row[x]));
}
}
exp(dst, dst);
}
/* ************************************************************************* */
/**
* @brief This function computes the Perona and Malik conductivity coefficient g2
* g2 = 1 / (1 + dL^2 / k^2)
* @param _Lx First order image derivative in X-direction (horizontal)
* @param _Ly First order image derivative in Y-direction (vertical)
* @param _dst Output image
* @param k Contrast factor parameter
*/
void pm_g2(InputArray _Lx, InputArray _Ly, OutputArray _dst, float k) {
CV_INSTRUMENT_REGION();
_dst.create(_Lx.size(), _Lx.type());
Mat Lx = _Lx.getMat();
Mat Ly = _Ly.getMat();
Mat dst = _dst.getMat();
Size sz = Lx.size();
dst.create(sz, Lx.type());
float k2inv = 1.0f / (k * k);
for(int y = 0; y < sz.height; y++) {
const float *Lx_row = Lx.ptr<float>(y);
const float *Ly_row = Ly.ptr<float>(y);
float* dst_row = dst.ptr<float>(y);
for(int x = 0; x < sz.width; x++) {
dst_row[x] = 1.0f / (1.0f + ((Lx_row[x] * Lx_row[x] + Ly_row[x] * Ly_row[x]) * k2inv));
}
}
}
/* ************************************************************************* */
/**
* @brief This function computes Weickert conductivity coefficient gw
* @param _Lx First order image derivative in X-direction (horizontal)
* @param _Ly First order image derivative in Y-direction (vertical)
* @param _dst Output image
* @param k Contrast factor parameter
* @note For more information check the following paper: J. Weickert
* Applications of nonlinear diffusion in image processing and computer vision,
* Proceedings of Algorithmy 2000
*/
void weickert_diffusivity(InputArray _Lx, InputArray _Ly, OutputArray _dst, float k) {
_dst.create(_Lx.size(), _Lx.type());
Mat Lx = _Lx.getMat();
Mat Ly = _Ly.getMat();
Mat dst = _dst.getMat();
Size sz = Lx.size();
float inv_k = 1.0f / (k*k);
for (int y = 0; y < sz.height; y++) {
const float* Lx_row = Lx.ptr<float>(y);
const float* Ly_row = Ly.ptr<float>(y);
float* dst_row = dst.ptr<float>(y);
for (int x = 0; x < sz.width; x++) {
float dL = inv_k*(Lx_row[x]*Lx_row[x] + Ly_row[x]*Ly_row[x]);
dst_row[x] = -3.315f/(dL*dL*dL*dL);
}
}
exp(dst, dst);
dst = 1.0 - dst;
}
/* ************************************************************************* */
/**
* @brief This function computes Charbonnier conductivity coefficient gc
* gc = 1 / sqrt(1 + dL^2 / k^2)
* @param _Lx First order image derivative in X-direction (horizontal)
* @param _Ly First order image derivative in Y-direction (vertical)
* @param _dst Output image
* @param k Contrast factor parameter
* @note For more information check the following paper: J. Weickert
* Applications of nonlinear diffusion in image processing and computer vision,
* Proceedings of Algorithmy 2000
*/
void charbonnier_diffusivity(InputArray _Lx, InputArray _Ly, OutputArray _dst, float k) {
_dst.create(_Lx.size(), _Lx.type());
Mat Lx = _Lx.getMat();
Mat Ly = _Ly.getMat();
Mat dst = _dst.getMat();
Size sz = Lx.size();
float inv_k = 1.0f / (k*k);
for (int y = 0; y < sz.height; y++) {
const float* Lx_row = Lx.ptr<float>(y);
const float* Ly_row = Ly.ptr<float>(y);
float* dst_row = dst.ptr<float>(y);
for (int x = 0; x < sz.width; x++) {
float den = sqrt(1.0f+inv_k*(Lx_row[x]*Lx_row[x] + Ly_row[x]*Ly_row[x]));
dst_row[x] = 1.0f / den;
}
}
}
/* ************************************************************************* */
/**
* @brief This function computes a good empirical value for the k contrast factor
* given an input image, the percentile (0-1), the gradient scale and the number of
* bins in the histogram
* @param img Input image
* @param perc Percentile of the image gradient histogram (0-1)
* @param gscale Scale for computing the image gradient histogram
* @param nbins Number of histogram bins
* @param ksize_x Kernel size in X-direction (horizontal) for the Gaussian smoothing kernel
* @param ksize_y Kernel size in Y-direction (vertical) for the Gaussian smoothing kernel
* @return k contrast factor
*/
float compute_k_percentile(const cv::Mat& img, float perc, float gscale, int nbins, int ksize_x, int ksize_y) {
CV_INSTRUMENT_REGION();
int nbin = 0, nelements = 0, nthreshold = 0, k = 0;
float kperc = 0.0, modg = 0.0;
float npoints = 0.0;
float hmax = 0.0;
// Create the array for the histogram
std::vector<int> hist(nbins, 0);
// Create the matrices
Mat gaussian = Mat::zeros(img.rows, img.cols, CV_32F);
Mat Lx = Mat::zeros(img.rows, img.cols, CV_32F);
Mat Ly = Mat::zeros(img.rows, img.cols, CV_32F);
// Perform the Gaussian convolution
gaussian_2D_convolution(img, gaussian, ksize_x, ksize_y, gscale);
// Compute the Gaussian derivatives Lx and Ly
Scharr(gaussian, Lx, CV_32F, 1, 0, 1, 0, cv::BORDER_DEFAULT);
Scharr(gaussian, Ly, CV_32F, 0, 1, 1, 0, cv::BORDER_DEFAULT);
// Skip the borders for computing the histogram
for (int i = 1; i < gaussian.rows - 1; i++) {
const float *lx = Lx.ptr<float>(i);
const float *ly = Ly.ptr<float>(i);
for (int j = 1; j < gaussian.cols - 1; j++) {
modg = lx[j]*lx[j] + ly[j]*ly[j];
// Get the maximum
if (modg > hmax) {
hmax = modg;
}
}
}
hmax = sqrt(hmax);
// Skip the borders for computing the histogram
for (int i = 1; i < gaussian.rows - 1; i++) {
const float *lx = Lx.ptr<float>(i);
const float *ly = Ly.ptr<float>(i);
for (int j = 1; j < gaussian.cols - 1; j++) {
modg = lx[j]*lx[j] + ly[j]*ly[j];
// Find the corresponding bin
if (modg != 0.0) {
nbin = (int)floor(nbins*(sqrt(modg) / hmax));
if (nbin == nbins) {
nbin--;
}
hist[nbin]++;
npoints++;
}
}
}
// Find the bin at which the requested percentile perc of the histogram is reached
nthreshold = (int)(npoints*perc);
for (k = 0; nelements < nthreshold && k < nbins; k++) {
nelements = nelements + hist[k];
}
if (nelements < nthreshold) {
kperc = 0.03f;
}
else {
kperc = hmax*((float)(k) / (float)nbins);
}
return kperc;
}
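// Example reading (assumed numbers): with perc = 0.7 and nbins = 300, k becomes
// the gradient magnitude below which roughly 70% of the non-zero gradients of
// the Gaussian-smoothed image fall; 0.03 is only a fallback for sparse histograms.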
/* ************************************************************************* */
/**
* @brief This function computes Scharr image derivatives
* @param src Input image
* @param dst Output image
* @param xorder Derivative order in X-direction (horizontal)
* @param yorder Derivative order in Y-direction (vertical)
* @param scale Scale factor for the derivative size
*/
void compute_scharr_derivatives(const cv::Mat& src, cv::Mat& dst, int xorder, int yorder, int scale) {
Mat kx, ky;
compute_derivative_kernels(kx, ky, xorder, yorder, scale);
sepFilter2D(src, dst, CV_32F, kx, ky);
}
/* ************************************************************************* */
/**
* @brief Compute derivative kernels for sizes different than 3
* @param _kx Horizontal kernel values
* @param _ky Vertical kernel values
* @param dx Derivative order in X-direction (horizontal)
* @param dy Derivative order in Y-direction (vertical)
* @param scale Scale factor for the derivative size
*/
void compute_derivative_kernels(cv::OutputArray _kx, cv::OutputArray _ky, int dx, int dy, int scale) {
CV_INSTRUMENT_REGION();
int ksize = 3 + 2 * (scale - 1);
// The standard Scharr kernel
if (scale == 1) {
getDerivKernels(_kx, _ky, dx, dy, 0, true, CV_32F);
return;
}
_kx.create(ksize, 1, CV_32F, -1, true);
_ky.create(ksize, 1, CV_32F, -1, true);
Mat kx = _kx.getMat();
Mat ky = _ky.getMat();
std::vector<float> kerI;
float w = 10.0f / 3.0f;
float norm = 1.0f / (2.0f*scale*(w + 2.0f));
for (int k = 0; k < 2; k++) {
Mat* kernel = k == 0 ? &kx : &ky;
int order = k == 0 ? dx : dy;
kerI.assign(ksize, 0.0f);
if (order == 0) {
kerI[0] = norm, kerI[ksize / 2] = w*norm, kerI[ksize - 1] = norm;
}
else if (order == 1) {
kerI[0] = -1, kerI[ksize / 2] = 0, kerI[ksize - 1] = 1;
}
Mat temp(kernel->rows, kernel->cols, CV_32F, &kerI[0]);
temp.copyTo(*kernel);
}
}
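// Worked example (assumed scale = 2, so ksize = 5): norm = 1 / (2*2*(10/3 + 2))
// = 3/64, the smoothing kernel becomes norm * [1, 0, 10/3, 0, 1] and the
// derivative kernel [-1, 0, 0, 0, 1]; scale = 1 falls back to standard Scharr.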
class Nld_Step_Scalar_Invoker : public cv::ParallelLoopBody
{
public:
Nld_Step_Scalar_Invoker(cv::Mat& Ld, const cv::Mat& c, cv::Mat& Lstep, float _stepsize)
: _Ld(&Ld)
, _c(&c)
, _Lstep(&Lstep)
, stepsize(_stepsize)
{
}
virtual ~Nld_Step_Scalar_Invoker()
{
}
void operator()(const cv::Range& range) const CV_OVERRIDE
{
cv::Mat& Ld = *_Ld;
const cv::Mat& c = *_c;
cv::Mat& Lstep = *_Lstep;
for (int i = range.start; i < range.end; i++)
{
const float *c_prev = c.ptr<float>(i - 1);
const float *c_curr = c.ptr<float>(i);
const float *c_next = c.ptr<float>(i + 1);
const float *ld_prev = Ld.ptr<float>(i - 1);
const float *ld_curr = Ld.ptr<float>(i);
const float *ld_next = Ld.ptr<float>(i + 1);
float *dst = Lstep.ptr<float>(i);
for (int j = 1; j < Lstep.cols - 1; j++)
{
float xpos = (c_curr[j] + c_curr[j+1])*(ld_curr[j+1] - ld_curr[j]);
float xneg = (c_curr[j-1] + c_curr[j]) *(ld_curr[j] - ld_curr[j-1]);
float ypos = (c_curr[j] + c_next[j]) *(ld_next[j] - ld_curr[j]);
float yneg = (c_prev[j] + c_curr[j]) *(ld_curr[j] - ld_prev[j]);
dst[j] = 0.5f*stepsize*(xpos - xneg + ypos - yneg);
}
}
}
private:
cv::Mat * _Ld;
const cv::Mat * _c;
cv::Mat * _Lstep;
float stepsize;
};
/* ************************************************************************* */
/**
* @brief This function performs a scalar non-linear diffusion step
 * @param Ld Image at the current evolution step, updated in place (Ld += Lstep)
 * @param c Conductivity image
 * @param Lstep Output buffer that receives the computed diffusion step
 * @param stepsize The step size in time units
 * @note Forward Euler scheme with a 3x3 stencil
 * The conductivity c is a scalar that depends on the gradient norm:
 * dL/dt = d(c * dL/dx)/dx + d(c * dL/dy)/dy
*/
void nld_step_scalar(cv::Mat& Ld, const cv::Mat& c, cv::Mat& Lstep, float stepsize) {
CV_INSTRUMENT_REGION();
cv::parallel_for_(cv::Range(1, Lstep.rows - 1), Nld_Step_Scalar_Invoker(Ld, c, Lstep, stepsize), (double)Ld.total()/(1 << 16));
float xneg, xpos, yneg, ypos;
float* dst = Lstep.ptr<float>(0);
const float* cprv = NULL;
const float* ccur = c.ptr<float>(0);
const float* cnxt = c.ptr<float>(1);
const float* ldprv = NULL;
const float* ldcur = Ld.ptr<float>(0);
const float* ldnxt = Ld.ptr<float>(1);
for (int j = 1; j < Lstep.cols - 1; j++) {
xpos = (ccur[j] + ccur[j+1]) * (ldcur[j+1] - ldcur[j]);
xneg = (ccur[j-1] + ccur[j]) * (ldcur[j] - ldcur[j-1]);
ypos = (ccur[j] + cnxt[j]) * (ldnxt[j] - ldcur[j]);
dst[j] = 0.5f*stepsize*(xpos - xneg + ypos);
}
dst = Lstep.ptr<float>(Lstep.rows - 1);
ccur = c.ptr<float>(Lstep.rows - 1);
cprv = c.ptr<float>(Lstep.rows - 2);
ldcur = Ld.ptr<float>(Lstep.rows - 1);
ldprv = Ld.ptr<float>(Lstep.rows - 2);
for (int j = 1; j < Lstep.cols - 1; j++) {
xpos = (ccur[j] + ccur[j+1]) * (ldcur[j+1] - ldcur[j]);
xneg = (ccur[j-1] + ccur[j]) * (ldcur[j] - ldcur[j-1]);
yneg = (cprv[j] + ccur[j]) * (ldcur[j] - ldprv[j]);
dst[j] = 0.5f*stepsize*(xpos - xneg - yneg);
}
ccur = c.ptr<float>(1);
ldcur = Ld.ptr<float>(1);
cprv = c.ptr<float>(0);
ldprv = Ld.ptr<float>(0);
int r0 = Lstep.cols - 1;
int r1 = Lstep.cols - 2;
for (int i = 1; i < Lstep.rows - 1; i++) {
cnxt = c.ptr<float>(i + 1);
ldnxt = Ld.ptr<float>(i + 1);
dst = Lstep.ptr<float>(i);
xpos = (ccur[0] + ccur[1]) * (ldcur[1] - ldcur[0]);
ypos = (ccur[0] + cnxt[0]) * (ldnxt[0] - ldcur[0]);
yneg = (cprv[0] + ccur[0]) * (ldcur[0] - ldprv[0]);
dst[0] = 0.5f*stepsize*(xpos + ypos - yneg);
xneg = (ccur[r1] + ccur[r0]) * (ldcur[r0] - ldcur[r1]);
ypos = (ccur[r0] + cnxt[r0]) * (ldnxt[r0] - ldcur[r0]);
yneg = (cprv[r0] + ccur[r0]) * (ldcur[r0] - ldprv[r0]);
dst[r0] = 0.5f*stepsize*(-xneg + ypos - yneg);
cprv = ccur;
ccur = cnxt;
ldprv = ldcur;
ldcur = ldnxt;
}
Ld += Lstep;
}
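/* ************************************************************************* */
/* Minimal sketch (editorial addition, assuming the declarations from
 * nldiffusion_functions.h shown below): one full diffusion update built
 * from the helpers in this file. */
static void example_diffusion_update(cv::Mat& Lt, float kcontrast, float stepsize)
{
    cv::Mat Lsmooth, Lx, Ly, Lflow;
    cv::Mat Lstep = cv::Mat::zeros(Lt.size(), CV_32F);
    // Derivatives are taken on a slightly smoothed copy of the image
    gaussian_2D_convolution(Lt, Lsmooth, 0, 0, 1.0f);
    image_derivatives_scharr(Lsmooth, Lx, 1, 0);
    image_derivatives_scharr(Lsmooth, Ly, 0, 1);
    // Conductivity image from the Perona-Malik g2 diffusivity
    pm_g2(Lx, Ly, Lflow, kcontrast);
    // One explicit Euler step; internally Lt += 0.5 * stepsize * div(c * grad L)
    nld_step_scalar(Lt, Lflow, Lstep, stepsize);
}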
/* ************************************************************************* */
/**
* @brief This function downsamples the input image using OpenCV resize
* @param src Input image to be downsampled
* @param dst Output image with half of the resolution of the input image
*/
void halfsample_image(const cv::Mat& src, cv::Mat& dst) {
// Make sure the destination image is of the right size
CV_Assert(src.cols / 2 == dst.cols);
CV_Assert(src.rows / 2 == dst.rows);
resize(src, dst, dst.size(), 0, 0, cv::INTER_AREA);
}
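// Design note (editorial): INTER_AREA averages all source pixels that map to
// a destination pixel, so the resize low-pass filters and decimates in one
// pass; no separate anti-aliasing blur is needed before halving.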
/* ************************************************************************* */
/**
* @brief This function checks if a given pixel is a maximum in a local neighbourhood
* @param img Input image where we will perform the maximum search
* @param dsize Half size of the neighbourhood
* @param value Response value at (x,y) position
* @param row Image row coordinate
* @param col Image column coordinate
 * @param same_img Flag indicating whether the response value belongs to the input image itself
 * @return true if the value is a local maximum, false otherwise
*/
bool check_maximum_neighbourhood(const cv::Mat& img, int dsize, float value, int row, int col, bool same_img) {
bool response = true;
for (int i = row - dsize; i <= row + dsize; i++) {
for (int j = col - dsize; j <= col + dsize; j++) {
if (i >= 0 && i < img.rows && j >= 0 && j < img.cols) {
if (same_img) {
if (i != row || j != col) {
if ((*(img.ptr<float>(i)+j)) > value) {
response = false;
return response;
}
}
}
else {
if ((*(img.ptr<float>(i)+j)) > value) {
response = false;
return response;
}
}
}
}
}
return response;
}
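/* Illustrative use (editorial, not from this diff): keep a detector response
 * at (i, j) only if it beats every neighbour in a (2*dsize+1)^2 window of
 * the same response image. */
static bool example_is_local_max(const cv::Mat& response, int i, int j)
{
    float value = response.ptr<float>(i)[j];
    return check_maximum_neighbourhood(response, 1, value, i, j, true);
}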
}

View File

@ -1,47 +0,0 @@
/**
* @file nldiffusion_functions.h
* @brief Functions for non-linear diffusion applications:
* 2D Gaussian Derivatives
* Perona and Malik conductivity equations
* Perona and Malik evolution
* @date Dec 27, 2011
* @author Pablo F. Alcantarilla
*/
#ifndef __OPENCV_FEATURES_2D_NLDIFFUSION_FUNCTIONS_H__
#define __OPENCV_FEATURES_2D_NLDIFFUSION_FUNCTIONS_H__
/* ************************************************************************* */
// Declaration of functions
namespace cv
{
// Gaussian 2D convolution
void gaussian_2D_convolution(const cv::Mat& src, cv::Mat& dst, int ksize_x, int ksize_y, float sigma);
// Diffusivity functions
void pm_g1(InputArray Lx, InputArray Ly, OutputArray dst, float k);
void pm_g2(InputArray Lx, InputArray Ly, OutputArray dst, float k);
void weickert_diffusivity(InputArray Lx, InputArray Ly, OutputArray dst, float k);
void charbonnier_diffusivity(InputArray Lx, InputArray Ly, OutputArray dst, float k);
float compute_k_percentile(const cv::Mat& img, float perc, float gscale, int nbins, int ksize_x, int ksize_y);
// Image derivatives
void compute_scharr_derivatives(const cv::Mat& src, cv::Mat& dst, int xorder, int yorder, int scale);
void compute_derivative_kernels(cv::OutputArray _kx, cv::OutputArray _ky, int dx, int dy, int scale);
void image_derivatives_scharr(const cv::Mat& src, cv::Mat& dst, int xorder, int yorder);
// Nonlinear diffusion filtering scalar step
void nld_step_scalar(cv::Mat& Ld, const cv::Mat& c, cv::Mat& Lstep, float stepsize);
// For non-maxima suppression
bool check_maximum_neighbourhood(const cv::Mat& img, int dsize, float value, int row, int col, bool same_img);
// Image downsampling
void halfsample_image(const cv::Mat& src, cv::Mat& dst);
}
#endif

View File

@ -1,42 +0,0 @@
#ifndef __OPENCV_FEATURES_2D_KAZE_UTILS_H__
#define __OPENCV_FEATURES_2D_KAZE_UTILS_H__
/* ************************************************************************* */
/**
* @brief This function computes the value of a 2D Gaussian function
* @param x X Position
* @param y Y Position
* @param sigma Standard Deviation
*/
inline float gaussian(float x, float y, float sigma) {
return expf(-(x*x + y*y) / (2.0f*sigma*sigma));
}
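/* Worked values (editorial): gaussian(0, 0, sigma) = 1 for any sigma, and
 * gaussian(sigma, 0, sigma) = expf(-0.5f) ~ 0.6065; the kernel is
 * unnormalized -- it peaks at 1 rather than integrating to 1. */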
/* ************************************************************************* */
/**
* @brief This function checks descriptor limits
* @param x X Position
* @param y Y Position
* @param width Image width
* @param height Image height
*/
inline void checkDescriptorLimits(int &x, int &y, int width, int height) {
if (x < 0) {
x = 0;
}
if (y < 0) {
y = 0;
}
if (x > width - 1) {
x = width - 1;
}
if (y > height - 1) {
y = height - 1;
}
}
#endif

View File

@ -1,122 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
/**
* @brief This function computes the Perona and Malik conductivity coefficient g2
* g2 = 1 / (1 + dL^2 / k^2)
* @param lx First order image derivative in X-direction (horizontal)
* @param ly First order image derivative in Y-direction (vertical)
* @param dst Output image
* @param k Contrast factor parameter
*/
__kernel void
AKAZE_pm_g2(__global const float* lx, __global const float* ly, __global float* dst,
float k, int size)
{
int i = get_global_id(0);
// The global work size may be rounded up, so check the bounds explicitly
if (!(i < size))
{
return;
}
const float k2inv = 1.0f / (k * k);
dst[i] = 1.0f / (1.0f + ((lx[i] * lx[i] + ly[i] * ly[i]) * k2inv));
}
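/* Worked value (editorial): at |grad L|^2 = k^2 the conductivity is
 * g2 = 1 / (1 + 1) = 0.5, so the contrast factor k marks the gradient
 * magnitude at which diffusion is cut in half. */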
__kernel void
AKAZE_nld_step_scalar(__global const float* lt, int lt_step, int lt_offset, int rows, int cols,
__global const float* lf, __global float* dst, float step_size)
{
    /* The labeling scheme for this five-point stencil:
[ a ]
[ -1 c +1 ]
[ b ]
*/
    // get_global_id(0) walks columns, get_global_id(1) walks rows
    int i = get_global_id(1);
    int j = get_global_id(0);
    // The global work size may be rounded up, so check the bounds explicitly
    if (!(i < rows && j < cols))
{
return;
}
// get row indexes
int a = (i - 1) * cols;
int c = (i ) * cols;
int b = (i + 1) * cols;
// compute stencil
float res = 0.0f;
    if (i == 0) // first row
{
if (j == 0 || j == (cols - 1))
{
res = 0.0f;
} else
{
res = (lf[c + j] + lf[c + j + 1])*(lt[c + j + 1] - lt[c + j]) +
(lf[c + j] + lf[c + j - 1])*(lt[c + j - 1] - lt[c + j]) +
(lf[c + j] + lf[b + j ])*(lt[b + j ] - lt[c + j]);
}
} else if (i == (rows - 1)) // last row
{
if (j == 0 || j == (cols - 1))
{
res = 0.0f;
} else
{
res = (lf[c + j] + lf[c + j + 1])*(lt[c + j + 1] - lt[c + j]) +
(lf[c + j] + lf[c + j - 1])*(lt[c + j - 1] - lt[c + j]) +
(lf[c + j] + lf[a + j ])*(lt[a + j ] - lt[c + j]);
}
} else // inner rows
{
if (j == 0) // first column
{
res = (lf[c + 0] + lf[c + 1])*(lt[c + 1] - lt[c + 0]) +
(lf[c + 0] + lf[b + 0])*(lt[b + 0] - lt[c + 0]) +
(lf[c + 0] + lf[a + 0])*(lt[a + 0] - lt[c + 0]);
} else if (j == (cols - 1)) // last column
{
res = (lf[c + j] + lf[c + j - 1])*(lt[c + j - 1] - lt[c + j]) +
(lf[c + j] + lf[b + j ])*(lt[b + j ] - lt[c + j]) +
(lf[c + j] + lf[a + j ])*(lt[a + j ] - lt[c + j]);
} else // inner stencil
{
res = (lf[c + j] + lf[c + j + 1])*(lt[c + j + 1] - lt[c + j]) +
(lf[c + j] + lf[c + j - 1])*(lt[c + j - 1] - lt[c + j]) +
(lf[c + j] + lf[b + j ])*(lt[b + j ] - lt[c + j]) +
(lf[c + j] + lf[a + j ])*(lt[a + j ] - lt[c + j]);
}
}
dst[c + j] = res * step_size;
}
/**
* @brief Compute determinant from hessians
* @details Compute Ldet by (Lxx.mul(Lyy) - Lxy.mul(Lxy)) * sigma
*
 * @param lxx Second-order spatial derivative in X
 * @param lxy Second-order mixed spatial derivative
 * @param lyy Second-order spatial derivative in Y
* @param dst output determinant
* @param sigma determinant will be scaled by this sigma
*/
__kernel void
AKAZE_compute_determinant(__global const float* lxx, __global const float* lxy, __global const float* lyy,
__global float* dst, float sigma, int size)
{
int i = get_global_id(0);
    // The global work size may be rounded up, so check the bounds explicitly
if (!(i < size))
{
return;
}
dst[i] = (lxx[i] * lyy[i] - lxy[i] * lxy[i]) * sigma;
}

View File

@ -1,73 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
#include "../test_precomp.hpp"
#include "cvconfig.h"
#include "opencv2/ts/ocl_test.hpp"
#include <functional>
#ifdef HAVE_OPENCL
namespace opencv_test {
namespace ocl {
#define TEST_IMAGES testing::Values(\
"detectors_descriptors_evaluation/images_datasets/leuven/img1.png",\
"../stitching/a3.png", \
"../stitching/s2.jpg")
PARAM_TEST_CASE(Feature2DFixture, std::function<Ptr<Feature2D>()>, std::string)
{
std::string filename;
Mat image, descriptors;
vector<KeyPoint> keypoints;
UMat uimage, udescriptors;
vector<KeyPoint> ukeypoints;
Ptr<Feature2D> feature;
virtual void SetUp()
{
feature = GET_PARAM(0)();
filename = GET_PARAM(1);
image = readImage(filename);
ASSERT_FALSE(image.empty());
image.copyTo(uimage);
OCL_OFF(feature->detect(image, keypoints));
OCL_ON(feature->detect(uimage, ukeypoints));
// note: we use keypoints from CPU for GPU too, to test descriptors separately
OCL_OFF(feature->compute(image, keypoints, descriptors));
OCL_ON(feature->compute(uimage, keypoints, udescriptors));
}
};
OCL_TEST_P(Feature2DFixture, KeypointsSame)
{
EXPECT_EQ(keypoints.size(), ukeypoints.size());
for (size_t i = 0; i < keypoints.size(); ++i)
{
EXPECT_GE(KeyPoint::overlap(keypoints[i], ukeypoints[i]), 0.95);
EXPECT_NEAR(keypoints[i].angle, ukeypoints[i].angle, 0.05);
}
}
OCL_TEST_P(Feature2DFixture, DescriptorsSame)
{
EXPECT_MAT_NEAR(descriptors, udescriptors, 0.001);
}
OCL_INSTANTIATE_TEST_CASE_P(AKAZE, Feature2DFixture,
testing::Combine(testing::Values([]() { return AKAZE::create(); }), TEST_IMAGES));
OCL_INSTANTIATE_TEST_CASE_P(AKAZE_DESCRIPTOR_KAZE, Feature2DFixture,
testing::Combine(testing::Values([]() { return AKAZE::create(AKAZE::DESCRIPTOR_KAZE); }), TEST_IMAGES));
}//ocl
}//cvtest
#endif //HAVE_OPENCL

View File

@ -1,138 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "test_precomp.hpp"
namespace opencv_test { namespace {
class CV_AgastTest : public cvtest::BaseTest
{
public:
CV_AgastTest();
~CV_AgastTest();
protected:
void run(int);
};
CV_AgastTest::CV_AgastTest() {}
CV_AgastTest::~CV_AgastTest() {}
void CV_AgastTest::run( int )
{
for(int type=0; type <= 2; ++type) {
Mat image1 = imread(string(ts->get_data_path()) + "inpaint/orig.png");
Mat image2 = imread(string(ts->get_data_path()) + "cameracalibration/chess9.png");
string xml = string(ts->get_data_path()) + format("agast/result%d.xml", type);
if (image1.empty() || image2.empty())
{
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_TEST_DATA );
return;
}
Mat gray1, gray2;
cvtColor(image1, gray1, COLOR_BGR2GRAY);
cvtColor(image2, gray2, COLOR_BGR2GRAY);
vector<KeyPoint> keypoints1;
vector<KeyPoint> keypoints2;
AGAST(gray1, keypoints1, 30, true, static_cast<AgastFeatureDetector::DetectorType>(type));
AGAST(gray2, keypoints2, (type > 0 ? 30 : 20), true, static_cast<AgastFeatureDetector::DetectorType>(type));
for(size_t i = 0; i < keypoints1.size(); ++i)
{
const KeyPoint& kp = keypoints1[i];
cv::circle(image1, kp.pt, cvRound(kp.size/2), Scalar(255, 0, 0));
}
for(size_t i = 0; i < keypoints2.size(); ++i)
{
const KeyPoint& kp = keypoints2[i];
cv::circle(image2, kp.pt, cvRound(kp.size/2), Scalar(255, 0, 0));
}
Mat kps1(1, (int)(keypoints1.size() * sizeof(KeyPoint)), CV_8U, &keypoints1[0]);
Mat kps2(1, (int)(keypoints2.size() * sizeof(KeyPoint)), CV_8U, &keypoints2[0]);
FileStorage fs(xml, FileStorage::READ);
if (!fs.isOpened())
{
fs.open(xml, FileStorage::WRITE);
if (!fs.isOpened())
{
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_TEST_DATA);
return;
}
fs << "exp_kps1" << kps1;
fs << "exp_kps2" << kps2;
fs.release();
fs.open(xml, FileStorage::READ);
if (!fs.isOpened())
{
ts->set_failed_test_info(cvtest::TS::FAIL_INVALID_TEST_DATA);
return;
}
}
Mat exp_kps1, exp_kps2;
read( fs["exp_kps1"], exp_kps1, Mat() );
read( fs["exp_kps2"], exp_kps2, Mat() );
fs.release();
if ( exp_kps1.size != kps1.size || 0 != cvtest::norm(exp_kps1, kps1, NORM_L2) ||
exp_kps2.size != kps2.size || 0 != cvtest::norm(exp_kps2, kps2, NORM_L2))
{
ts->set_failed_test_info(cvtest::TS::FAIL_MISMATCH);
return;
}
/*cv::namedWindow("Img1"); cv::imshow("Img1", image1);
cv::namedWindow("Img2"); cv::imshow("Img2", image2);
cv::waitKey(0);*/
}
ts->set_failed_test_info(cvtest::TS::OK);
}
TEST(Features2d_AGAST, regression) { CV_AgastTest test; test.safe_run(); }
}} // namespace

View File

@ -1,48 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
#include "test_precomp.hpp"
namespace opencv_test { namespace {
TEST(Features2d_AKAZE, detect_and_compute_split)
{
Mat testImg(100, 100, CV_8U);
RNG rng(101);
rng.fill(testImg, RNG::UNIFORM, Scalar(0), Scalar(255), true);
Ptr<Feature2D> ext = AKAZE::create(AKAZE::DESCRIPTOR_MLDB, 0, 3, 0.001f, 1, 1, KAZE::DIFF_PM_G2);
vector<KeyPoint> detAndCompKps;
Mat desc;
ext->detectAndCompute(testImg, noArray(), detAndCompKps, desc);
vector<KeyPoint> detKps;
ext->detect(testImg, detKps);
ASSERT_EQ(detKps.size(), detAndCompKps.size());
for(size_t i = 0; i < detKps.size(); i++)
ASSERT_EQ(detKps[i].hash(), detAndCompKps[i].hash());
}
/**
* This test is here to guard propagation of NaNs that happens on this image. NaNs are guarded
* by debug asserts in AKAZE, which should fire for you if you are lucky.
*
* This test also reveals problems with uninitialized memory that happens only on this image.
* This is very hard to hit and depends a lot on particular allocator. Run this test in valgrind and check
* for uninitialized values if you think you are hitting this problem again.
*/
TEST(Features2d_AKAZE, uninitialized_and_nans)
{
Mat b1 = imread(cvtest::TS::ptr()->get_data_path() + "../stitching/b1.png");
ASSERT_FALSE(b1.empty());
vector<KeyPoint> keypoints;
Mat desc;
Ptr<Feature2D> akaze = AKAZE::create();
akaze->detectAndCompute(b1, noArray(), keypoints, desc);
}
}} // namespace

View File

@ -1,108 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "test_precomp.hpp"
namespace opencv_test { namespace {
class CV_BRISKTest : public cvtest::BaseTest
{
public:
CV_BRISKTest();
~CV_BRISKTest();
protected:
void run(int);
};
CV_BRISKTest::CV_BRISKTest() {}
CV_BRISKTest::~CV_BRISKTest() {}
void CV_BRISKTest::run( int )
{
Mat image1 = imread(string(ts->get_data_path()) + "inpaint/orig.png");
Mat image2 = imread(string(ts->get_data_path()) + "cameracalibration/chess9.png");
if (image1.empty() || image2.empty())
{
ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_TEST_DATA );
return;
}
Mat gray1, gray2;
cvtColor(image1, gray1, COLOR_BGR2GRAY);
cvtColor(image2, gray2, COLOR_BGR2GRAY);
Ptr<FeatureDetector> detector = BRISK::create();
// Check parameter get/set functions.
BRISK* detectorTyped = dynamic_cast<BRISK*>(detector.get());
ASSERT_NE(nullptr, detectorTyped);
detectorTyped->setOctaves(3);
detectorTyped->setThreshold(30);
ASSERT_EQ(detectorTyped->getOctaves(), 3);
ASSERT_EQ(detectorTyped->getThreshold(), 30);
detectorTyped->setOctaves(4);
detectorTyped->setThreshold(29);
ASSERT_EQ(detectorTyped->getOctaves(), 4);
ASSERT_EQ(detectorTyped->getThreshold(), 29);
vector<KeyPoint> keypoints1;
vector<KeyPoint> keypoints2;
detector->detect(image1, keypoints1);
detector->detect(image2, keypoints2);
for(size_t i = 0; i < keypoints1.size(); ++i)
{
const KeyPoint& kp = keypoints1[i];
ASSERT_NE(kp.angle, -1);
}
for(size_t i = 0; i < keypoints2.size(); ++i)
{
const KeyPoint& kp = keypoints2[i];
ASSERT_NE(kp.angle, -1);
}
}
TEST(Features2d_BRISK, regression) { CV_BRISKTest test; test.safe_run(); }
}} // namespace

View File

@ -20,17 +20,9 @@ const static std::string IMAGE_BIKES = "detectors_descriptors_evaluation/images_
INSTANTIATE_TEST_CASE_P(SIFT, DescriptorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return SIFT::create(); }, []() { return SIFT::create(); }, 0.98f));
INSTANTIATE_TEST_CASE_P(BRISK, DescriptorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return BRISK::create(); }, []() { return BRISK::create(); }, 0.99f));
INSTANTIATE_TEST_CASE_P(ORB, DescriptorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return ORB::create(); }, []() { return ORB::create(); }, 0.99f));
INSTANTIATE_TEST_CASE_P(AKAZE, DescriptorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return AKAZE::create(); }, []() { return AKAZE::create(); }, 0.99f));
INSTANTIATE_TEST_CASE_P(AKAZE_DESCRIPTOR_KAZE, DescriptorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return AKAZE::create(AKAZE::DESCRIPTOR_KAZE); }, []() { return AKAZE::create(AKAZE::DESCRIPTOR_KAZE); }, 0.99f));
/*
* Descriptor's scale invariance check
@ -39,10 +31,4 @@ INSTANTIATE_TEST_CASE_P(AKAZE_DESCRIPTOR_KAZE, DescriptorRotationInvariance,
INSTANTIATE_TEST_CASE_P(SIFT, DescriptorScaleInvariance,
Value(IMAGE_BIKES, []() { return SIFT::create(0, 3, 0.09); }, []() { return SIFT::create(0, 3, 0.09); }, 0.78f));
INSTANTIATE_TEST_CASE_P(AKAZE, DescriptorScaleInvariance,
Value(IMAGE_BIKES, []() { return AKAZE::create(); }, []() { return AKAZE::create(); }, 0.6f));
INSTANTIATE_TEST_CASE_P(AKAZE_DESCRIPTOR_KAZE, DescriptorScaleInvariance,
Value(IMAGE_BIKES, []() { return AKAZE::create(AKAZE::DESCRIPTOR_KAZE); }, []() { return AKAZE::create(AKAZE::DESCRIPTOR_KAZE); }, 0.55f));
}} // namespace

View File

@ -25,14 +25,6 @@ TEST( Features2d_DescriptorExtractor_SIFT, regression )
test.safe_run();
}
TEST( Features2d_DescriptorExtractor_BRISK, regression )
{
CV_DescriptorExtractorTest<Hamming> test( "descriptor-brisk",
(CV_DescriptorExtractorTest<Hamming>::DistanceType)2.f,
BRISK::create() );
test.safe_run();
}
TEST( Features2d_DescriptorExtractor_ORB, regression )
{
// TODO adjust the parameters below
@ -46,31 +38,6 @@ TEST( Features2d_DescriptorExtractor_ORB, regression )
test.safe_run();
}
TEST( Features2d_DescriptorExtractor_KAZE, regression )
{
CV_DescriptorExtractorTest< L2<float> > test( "descriptor-kaze", 0.03f,
KAZE::create(),
L2<float>(), KAZE::create() );
test.safe_run();
}
TEST( Features2d_DescriptorExtractor_AKAZE, regression )
{
CV_DescriptorExtractorTest<Hamming> test( "descriptor-akaze",
(CV_DescriptorExtractorTest<Hamming>::DistanceType)(486*0.05f),
AKAZE::create(),
Hamming(), AKAZE::create());
test.safe_run();
}
TEST( Features2d_DescriptorExtractor_AKAZE_DESCRIPTOR_KAZE, regression )
{
CV_DescriptorExtractorTest< L2<float> > test( "descriptor-akaze-with-kaze-desc", 0.03f,
AKAZE::create(AKAZE::DESCRIPTOR_KAZE),
L2<float>(), AKAZE::create(AKAZE::DESCRIPTOR_KAZE));
test.safe_run();
}
TEST( Features2d_DescriptorExtractor, batch_ORB )
{
string path = string(cvtest::TS::ptr()->get_data_path() + "detectors_descriptors_evaluation/images_datasets/graf");
@ -144,15 +111,7 @@ TEST_P(DescriptorImage, no_crash)
glob(cvtest::TS::ptr()->get_data_path() + pattern, fnames, false);
std::sort(fnames.begin(), fnames.end());
Ptr<AKAZE> akaze_mldb = AKAZE::create(AKAZE::DESCRIPTOR_MLDB);
Ptr<AKAZE> akaze_mldb_upright = AKAZE::create(AKAZE::DESCRIPTOR_MLDB_UPRIGHT);
Ptr<AKAZE> akaze_mldb_256 = AKAZE::create(AKAZE::DESCRIPTOR_MLDB, 256);
Ptr<AKAZE> akaze_mldb_upright_256 = AKAZE::create(AKAZE::DESCRIPTOR_MLDB_UPRIGHT, 256);
Ptr<AKAZE> akaze_kaze = AKAZE::create(AKAZE::DESCRIPTOR_KAZE);
Ptr<AKAZE> akaze_kaze_upright = AKAZE::create(AKAZE::DESCRIPTOR_KAZE_UPRIGHT);
Ptr<ORB> orb = ORB::create();
Ptr<KAZE> kaze = KAZE::create();
Ptr<BRISK> brisk = BRISK::create();
size_t n = fnames.size();
vector<KeyPoint> keypoints;
Mat descriptors;
@ -183,15 +142,7 @@ TEST_P(DescriptorImage, no_crash)
} \
ASSERT_EQ(descriptors.rows, (int)keypoints.size());
TEST_DETECTOR("AKAZE:MLDB", akaze_mldb);
TEST_DETECTOR("AKAZE:MLDB_UPRIGHT", akaze_mldb_upright);
TEST_DETECTOR("AKAZE:MLDB_256", akaze_mldb_256);
TEST_DETECTOR("AKAZE:MLDB_UPRIGHT_256", akaze_mldb_upright_256);
TEST_DETECTOR("AKAZE:KAZE", akaze_kaze);
TEST_DETECTOR("AKAZE:KAZE_UPRIGHT", akaze_kaze_upright);
TEST_DETECTOR("KAZE", kaze);
TEST_DETECTOR("ORB", orb);
TEST_DETECTOR("BRISK", brisk);
}
}

View File

@ -20,17 +20,9 @@ const static std::string IMAGE_BIKES = "detectors_descriptors_evaluation/images_
INSTANTIATE_TEST_CASE_P(SIFT, DetectorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return SIFT::create(); }, 0.45f, 0.70f));
INSTANTIATE_TEST_CASE_P(BRISK, DetectorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return BRISK::create(); }, 0.45f, 0.76f));
INSTANTIATE_TEST_CASE_P(ORB, DetectorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return ORB::create(); }, 0.5f, 0.76f));
INSTANTIATE_TEST_CASE_P(AKAZE, DetectorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return AKAZE::create(); }, 0.5f, 0.71f));
INSTANTIATE_TEST_CASE_P(AKAZE_DESCRIPTOR_KAZE, DetectorRotationInvariance,
Value(IMAGE_TSUKUBA, []() { return AKAZE::create(AKAZE::DESCRIPTOR_KAZE); }, 0.5f, 0.71f));
/*
* Detector's scale invariance check
@ -39,19 +31,7 @@ INSTANTIATE_TEST_CASE_P(AKAZE_DESCRIPTOR_KAZE, DetectorRotationInvariance,
INSTANTIATE_TEST_CASE_P(SIFT, DetectorScaleInvariance,
Value(IMAGE_BIKES, []() { return SIFT::create(0, 3, 0.09); }, 0.60f, 0.98f));
INSTANTIATE_TEST_CASE_P(BRISK, DetectorScaleInvariance,
Value(IMAGE_BIKES, []() { return BRISK::create(); }, 0.08f, 0.49f));
INSTANTIATE_TEST_CASE_P(ORB, DetectorScaleInvariance,
Value(IMAGE_BIKES, []() { return ORB::create(); }, 0.08f, 0.49f));
INSTANTIATE_TEST_CASE_P(KAZE, DetectorScaleInvariance,
Value(IMAGE_BIKES, []() { return KAZE::create(); }, 0.08f, 0.49f));
INSTANTIATE_TEST_CASE_P(AKAZE, DetectorScaleInvariance,
Value(IMAGE_BIKES, []() { return AKAZE::create(); }, 0.08f, 0.49f));
INSTANTIATE_TEST_CASE_P(AKAZE_DESCRIPTOR_KAZE, DetectorScaleInvariance,
Value(IMAGE_BIKES, []() { return AKAZE::create(AKAZE::DESCRIPTOR_KAZE); }, 0.08f, 0.49f));
}} // namespace

View File

@ -24,24 +24,12 @@ TEST( Features2d_Detector_SIFT, regression )
test.safe_run();
}
TEST( Features2d_Detector_BRISK, regression )
{
CV_FeatureDetectorTest test( "detector-brisk", BRISK::create() );
test.safe_run();
}
TEST( Features2d_Detector_FAST, regression )
{
CV_FeatureDetectorTest test( "detector-fast", FastFeatureDetector::create() );
test.safe_run();
}
TEST( Features2d_Detector_AGAST, regression )
{
CV_FeatureDetectorTest test( "detector-agast", AgastFeatureDetector::create() );
test.safe_run();
}
TEST( Features2d_Detector_GFTT, regression )
{
CV_FeatureDetectorTest test( "detector-gftt", GFTTDetector::create() );
@ -68,22 +56,4 @@ TEST( Features2d_Detector_ORB, regression )
test.safe_run();
}
TEST( Features2d_Detector_KAZE, regression )
{
CV_FeatureDetectorTest test( "detector-kaze", KAZE::create() );
test.safe_run();
}
TEST( Features2d_Detector_AKAZE, regression )
{
CV_FeatureDetectorTest test( "detector-akaze", AKAZE::create() );
test.safe_run();
}
TEST( Features2d_Detector_AKAZE_DESCRIPTOR_KAZE, regression )
{
CV_FeatureDetectorTest test( "detector-akaze-with-kaze-desc", AKAZE::create(AKAZE::DESCRIPTOR_KAZE) );
test.safe_run();
}
}} // namespace

View File

@ -115,25 +115,12 @@ protected:
// Registration of tests
TEST(Features2d_Detector_Keypoints_BRISK, validation)
{
CV_FeatureDetectorKeypointsTest test(BRISK::create());
test.safe_run();
}
TEST(Features2d_Detector_Keypoints_FAST, validation)
{
CV_FeatureDetectorKeypointsTest test(FastFeatureDetector::create());
test.safe_run();
}
TEST(Features2d_Detector_Keypoints_AGAST, validation)
{
CV_FeatureDetectorKeypointsTest test(AgastFeatureDetector::create());
test.safe_run();
}
TEST(Features2d_Detector_Keypoints_HARRIS, validation)
{
@ -161,21 +148,6 @@ TEST(Features2d_Detector_Keypoints_ORB, validation)
test.safe_run();
}
TEST(Features2d_Detector_Keypoints_KAZE, validation)
{
CV_FeatureDetectorKeypointsTest test(KAZE::create());
test.safe_run();
}
TEST(Features2d_Detector_Keypoints_AKAZE, validation)
{
CV_FeatureDetectorKeypointsTest test_kaze(AKAZE::create(AKAZE::DESCRIPTOR_KAZE));
test_kaze.safe_run();
CV_FeatureDetectorKeypointsTest test_mldb(AKAZE::create(AKAZE::DESCRIPTOR_MLDB));
test_mldb.safe_run();
}
TEST(Features2d_Detector_Keypoints_SIFT, validation)
{
CV_FeatureDetectorKeypointsTest test(SIFT::create());

View File

@ -35,29 +35,13 @@ QUnit.test('Detectors', function(assert) {
assert.equal(kp.size(), 7, 'MSER');
*/
let brisk = new cv.BRISK();
brisk.detect(image, kp);
assert.equal(kp.size(), 191, 'BRISK');
let ffd = new cv.FastFeatureDetector();
ffd.detect(image, kp);
assert.equal(kp.size(), 12, 'FastFeatureDetector');
let afd = new cv.AgastFeatureDetector();
afd.detect(image, kp);
assert.equal(kp.size(), 67, 'AgastFeatureDetector');
let gftt = new cv.GFTTDetector();
gftt.detect(image, kp);
assert.equal(kp.size(), 168, 'GFTTDetector');
let kaze = new cv.KAZE();
kaze.detect(image, kp);
assert.equal(kp.size(), 159, 'KAZE');
let akaze = new cv.AKAZE();
akaze.detect(image, kp);
assert.equal(kp.size(), 53, 'AKAZE');
});
QUnit.test('SimpleBlobDetector', function(assert) {

View File

@ -12,16 +12,16 @@ class algorithm_rw_test(NewOpenCVTests):
os.close(fd)
# some arbitrary non-default parameters
gold = cv.AKAZE_create(descriptor_size=1, descriptor_channels=2, nOctaves=3, threshold=4.0)
gold.write(cv.FileStorage(fname, cv.FILE_STORAGE_WRITE), "AKAZE")
gold = cv.ORB_create(nfeatures=200, scaleFactor=1.3, nlevels=5, edgeThreshold=28)
gold.write(cv.FileStorage(fname, cv.FILE_STORAGE_WRITE), "ORB")
fs = cv.FileStorage(fname, cv.FILE_STORAGE_READ)
algorithm = cv.AKAZE_create()
algorithm.read(fs.getNode("AKAZE"))
algorithm = cv.ORB_create()
algorithm.read(fs.getNode("ORB"))
self.assertEqual(algorithm.getDescriptorSize(), 1)
self.assertEqual(algorithm.getDescriptorChannels(), 2)
self.assertEqual(algorithm.getNOctaves(), 3)
self.assertEqual(algorithm.getThreshold(), 4.0)
self.assertEqual(algorithm.getMaxFeatures(), 200)
self.assertAlmostEqual(algorithm.getScaleFactor(), 1.3, places=6)
self.assertEqual(algorithm.getNLevels(), 5)
self.assertEqual(algorithm.getEdgeThreshold(), 28)
os.remove(fname)

View File

@ -20,9 +20,9 @@ namespace ocl {
typedef TestBaseWithParam<string> stitch;
#if defined(HAVE_OPENCV_XFEATURES2D) && defined(OPENCV_ENABLE_NONFREE)
#define TEST_DETECTORS testing::Values("surf", "orb", "akaze")
#define TEST_DETECTORS testing::Values("surf", "sift", "orb", "akaze")
#else
#define TEST_DETECTORS testing::Values("orb", "akaze")
#define TEST_DETECTORS testing::Values("orb", "sift")
#endif
OCL_PERF_TEST_P(stitch, a123, TEST_DETECTORS)

View File

@ -6,6 +6,7 @@
#ifdef HAVE_OPENCV_XFEATURES2D
#include "opencv2/xfeatures2d/nonfree.hpp"
#include "opencv2/xfeatures2d.hpp"
#endif
namespace cv
@ -15,12 +16,16 @@ static inline Ptr<Feature2D> getFeatureFinder(const std::string& name)
{
if (name == "orb")
return ORB::create();
else if (name == "sift")
return SIFT::create();
#if defined(HAVE_OPENCV_XFEATURES2D) && defined(OPENCV_ENABLE_NONFREE)
else if (name == "surf")
return xfeatures2d::SURF::create();
#endif
#if defined(HAVE_OPENCV_XFEATURES2D)
else if (name == "akaze")
return AKAZE::create();
return xfeatures2d::AKAZE::create();
#endif
else
return Ptr<Feature2D>();
}

View File

@ -20,7 +20,7 @@ typedef TestBaseWithParam<tuple<string, int>> stitchExposureCompMultiFeed;
#if defined(HAVE_OPENCV_XFEATURES2D) && defined(OPENCV_ENABLE_NONFREE)
#define TEST_DETECTORS testing::Values("surf", "orb", "akaze")
#else
#define TEST_DETECTORS testing::Values("orb", "akaze")
#define TEST_DETECTORS testing::Values("orb")
#endif
#define TEST_EXP_COMP_BS testing::Values(32, 16, 12, 10, 8)
#define TEST_EXP_COMP_NR_FEED testing::Values(1, 2, 3, 4, 5)

View File

@ -152,16 +152,12 @@ dnn = {'dnn_Net': ['setInput', 'forward', 'setPreferableBackend','getUnconnected
'readNetFromONNX', 'readNetFromTFLite', 'readNet', 'blobFromImage']}
features2d = {'Feature2D': ['detect', 'compute', 'detectAndCompute', 'descriptorSize', 'descriptorType', 'defaultNorm', 'empty', 'getDefaultName'],
'BRISK': ['create', 'getDefaultName'],
'ORB': ['create', 'setMaxFeatures', 'setScaleFactor', 'setNLevels', 'setEdgeThreshold', 'setFastThreshold', 'setFirstLevel', 'setWTA_K', 'setScoreType', 'setPatchSize', 'getFastThreshold', 'getDefaultName'],
'MSER': ['create', 'detectRegions', 'setDelta', 'getDelta', 'setMinArea', 'getMinArea', 'setMaxArea', 'getMaxArea', 'setPass2Only', 'getPass2Only', 'getDefaultName'],
'FastFeatureDetector': ['create', 'setThreshold', 'getThreshold', 'setNonmaxSuppression', 'getNonmaxSuppression', 'setType', 'getType', 'getDefaultName'],
'AgastFeatureDetector': ['create', 'setThreshold', 'getThreshold', 'setNonmaxSuppression', 'getNonmaxSuppression', 'setType', 'getType', 'getDefaultName'],
'GFTTDetector': ['create', 'setMaxFeatures', 'getMaxFeatures', 'setQualityLevel', 'getQualityLevel', 'setMinDistance', 'getMinDistance', 'setBlockSize', 'getBlockSize', 'setHarrisDetector', 'getHarrisDetector', 'setK', 'getK', 'getDefaultName'],
'SimpleBlobDetector': ['create', 'setParams', 'getParams', 'getDefaultName'],
'SimpleBlobDetector_Params': [],
'KAZE': ['create', 'setExtended', 'getExtended', 'setUpright', 'getUpright', 'setThreshold', 'getThreshold', 'setNOctaves', 'getNOctaves', 'setNOctaveLayers', 'getNOctaveLayers', 'setDiffusivity', 'getDiffusivity', 'getDefaultName'],
'AKAZE': ['create', 'setDescriptorType', 'getDescriptorType', 'setDescriptorSize', 'getDescriptorSize', 'setDescriptorChannels', 'getDescriptorChannels', 'setThreshold', 'getThreshold', 'setNOctaves', 'getNOctaves', 'setNOctaveLayers', 'getNOctaveLayers', 'setDiffusivity', 'getDiffusivity', 'getDefaultName'],
'DescriptorMatcher': ['add', 'clear', 'empty', 'isMaskSupported', 'train', 'match', 'knnMatch', 'radiusMatch', 'clone', 'create'],
'BFMatcher': ['isMaskSupported', 'create'],
'': ['drawKeypoints', 'drawMatches', 'drawMatchesKnn']}

View File

@ -5,6 +5,9 @@
#include <opencv2/3d.hpp>
#include <iostream>
#include <iomanip>
#ifdef HAVE_OPENCV_XFEATURES2D
#include "opencv2/xfeatures2d.hpp"
#endif
using namespace std;
using namespace cv;
@ -33,7 +36,7 @@ int main(int argc, char** argv)
vector<String> fileName;
cv::CommandLineParser parser(argc, argv,
"{help h ||}"
"{feature|brisk|}"
"{feature|orb|}"
"{flann||}"
"{maxlines|50|}"
"{image1|aero1.jpg|}{image2|aero3.jpg|}");
@ -88,11 +91,16 @@ int main(int argc, char** argv)
}
else if (feature == "brisk")
{
backend = BRISK::create();
#ifdef HAVE_OPENCV_XFEATURES2D
backend = xfeatures2d::BRISK::create();
if (useFlann)
matcher = makePtr<FlannBasedMatcher>(makePtr<flann::LshIndexParams>(6, 12, 1));
else
matcher = DescriptorMatcher::create("BruteForce-Hamming");
#else
cout << "OpenCV is built without opencv_contrib modules. BRISK algorithm is not available!" << std::endl;
return -1;
#endif
}
else
{

View File

@ -424,14 +424,22 @@ int main(int argc, char* argv[])
}
else if (features_type == "akaze")
{
finder = AKAZE::create();
}
#ifdef HAVE_OPENCV_XFEATURES2D
finder = xfeatures2d::AKAZE::create();
#else
cout << "OpenCV is built without opencv_contrib modules. AKAZE algorithm is not available!" << std::endl;
return -1;
#endif
}
else if (features_type == "surf")
{
#if defined(HAVE_OPENCV_XFEATURES2D) && defined(OPENCV_ENABLE_NONFREE)
finder = xfeatures2d::SURF::create();
}
#else
cout << "OpenCV is built without NONFREE modules. SURF algorithm is not available!" << std::endl;
return -1;
#endif
}
else if (features_type == "sift")
{
finder = SIFT::create();

View File

@ -308,18 +308,39 @@ void createFeatures(const std::string &featureName, int numKeypoints, cv::Ptr<cv
}
else if (featureName == "KAZE")
{
detector = cv::KAZE::create();
descriptor = cv::KAZE::create();
#if defined (HAVE_OPENCV_XFEATURES2D)
detector = cv::xfeatures2d::KAZE::create();
descriptor = cv::xfeatures2d::KAZE::create();
#else
std::cout << "xfeatures2d module is not available." << std::endl;
std::cout << "Default to ORB." << std::endl;
detector = cv::ORB::create(numKeypoints);
descriptor = cv::ORB::create(numKeypoints);
#endif
}
else if (featureName == "AKAZE")
{
detector = cv::AKAZE::create();
descriptor = cv::AKAZE::create();
#if defined (HAVE_OPENCV_XFEATURES2D)
detector = cv::xfeatures2d::AKAZE::create();
descriptor = cv::xfeatures2d::AKAZE::create();
#else
std::cout << "xfeatures2d module is not available." << std::endl;
std::cout << "Default to ORB." << std::endl;
detector = cv::ORB::create(numKeypoints);
descriptor = cv::ORB::create(numKeypoints);
#endif
}
else if (featureName == "BRISK")
{
detector = cv::BRISK::create();
descriptor = cv::BRISK::create();
#if defined (HAVE_OPENCV_XFEATURES2D)
detector = cv::xfeatures2d::BRISK::create();
descriptor = cv::xfeatures2d::BRISK::create();
#else
std::cout << "xfeatures2d module is not available." << std::endl;
std::cout << "Default to ORB." << std::endl;
detector = cv::ORB::create(numKeypoints);
descriptor = cv::ORB::create(numKeypoints);
#endif
}
else if (featureName == "SIFT")
{
@ -341,7 +362,7 @@ void createFeatures(const std::string &featureName, int numKeypoints, cv::Ptr<cv
else if (featureName == "BINBOOST")
{
#if defined (HAVE_OPENCV_XFEATURES2D)
detector = cv::KAZE::create();
detector = cv::xfeatures2d::KAZE::create();
descriptor = cv::xfeatures2d::BoostDesc::create();
#else
std::cout << "xfeatures2d module is not available." << std::endl;
@ -353,7 +374,7 @@ void createFeatures(const std::string &featureName, int numKeypoints, cv::Ptr<cv
else if (featureName == "VGG")
{
#if defined (HAVE_OPENCV_XFEATURES2D)
detector = cv::KAZE::create();
detector = cv::xfeatures2d::KAZE::create();
descriptor = cv::xfeatures2d::VGG::create();
#else
std::cout << "xfeatures2d module is not available." << std::endl;

View File

@ -1,7 +1,9 @@
#include <iostream>
#include "opencv2/opencv_modules.hpp"  // defines HAVE_OPENCV_XFEATURES2D when the contrib module is present
#ifdef HAVE_OPENCV_XFEATURES2D
#include <opencv2/features2d.hpp>
#include "opencv2/xfeatures2d.hpp"
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
using namespace std;
using namespace cv;
@ -28,7 +30,7 @@ int main(int argc, char* argv[])
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
Ptr<AKAZE> akaze = AKAZE::create();
Ptr<xfeatures2d::AKAZE> akaze = xfeatures2d::AKAZE::create();
akaze->detectAndCompute(img1, noArray(), kpts1, desc1);
akaze->detectAndCompute(img2, noArray(), kpts2, desc2);
//! [AKAZE]
@ -96,3 +98,10 @@ int main(int argc, char* argv[])
return 0;
}
#else
int main()
{
std::cout << "This tutorial code needs the xfeatures2d contrib module to be run." << std::endl;
return 0;
}
#endif

View File

@ -1,11 +1,14 @@
#include <vector>
#include <iostream>
#include <iomanip>
#include "opencv2/opencv_modules.hpp"  // defines HAVE_OPENCV_XFEATURES2D when the contrib module is present
#ifdef HAVE_OPENCV_XFEATURES2D
#include <opencv2/features2d.hpp>
#include "opencv2/xfeatures2d.hpp"
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/3d.hpp>
#include <opencv2/highgui.hpp> //for imshow
#include <vector>
#include <iostream>
#include <iomanip>
#include "stats.h" // Stats structure definition
#include "utils.h" // Drawing and printing functions
@ -148,7 +151,7 @@ int main(int argc, char **argv)
}
Stats stats, akaze_stats, orb_stats;
Ptr<AKAZE> akaze = AKAZE::create();
Ptr<xfeatures2d::AKAZE> akaze = xfeatures2d::AKAZE::create();
akaze->setThreshold(akaze_thresh);
Ptr<ORB> orb = ORB::create();
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
@ -211,3 +214,10 @@ int main(int argc, char **argv)
printStatistics("ORB", orb_stats);
return 0;
}
#else
int main()
{
std::cout << "This tutorial code needs the xfeatures2d contrib module to be run." << std::endl;
return 0;
}
#endif

View File

@ -1,115 +0,0 @@
#include <iostream>
#include "opencv2/opencv_modules.hpp"
#ifdef HAVE_OPENCV_XFEATURES2D
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>
// If you find this code useful, please add a reference to the following paper in your work:
// Gil Levi and Tal Hassner, "LATCH: Learned Arrangements of Three Patch Codes", arXiv preprint arXiv:1501.03719, 15 Jan. 2015
using namespace std;
using namespace cv;
const float inlier_threshold = 2.5f; // Distance threshold to identify inliers
const float nn_match_ratio = 0.8f; // Nearest neighbor matching ratio
int main(int argc, char* argv[])
{
CommandLineParser parser(argc, argv,
"{@img1 | graf1.png | input image 1}"
"{@img2 | graf3.png | input image 2}"
"{@homography | H1to3p.xml | homography matrix}");
Mat img1 = imread( samples::findFile( parser.get<String>("@img1") ), IMREAD_GRAYSCALE);
Mat img2 = imread( samples::findFile( parser.get<String>("@img2") ), IMREAD_GRAYSCALE);
Mat homography;
FileStorage fs( samples::findFile( parser.get<String>("@homography") ), FileStorage::READ);
fs.getFirstTopLevelNode() >> homography;
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
Ptr<cv::ORB> orb_detector = cv::ORB::create(10000);
Ptr<xfeatures2d::LATCH> latch = xfeatures2d::LATCH::create();
orb_detector->detect(img1, kpts1);
latch->compute(img1, kpts1, desc1);
orb_detector->detect(img2, kpts2);
latch->compute(img2, kpts2, desc2);
BFMatcher matcher(NORM_HAMMING);
vector< vector<DMatch> > nn_matches;
matcher.knnMatch(desc1, desc2, nn_matches, 2);
vector<KeyPoint> matched1, matched2, inliers1, inliers2;
vector<DMatch> good_matches;
for (size_t i = 0; i < nn_matches.size(); i++) {
DMatch first = nn_matches[i][0];
float dist1 = nn_matches[i][0].distance;
float dist2 = nn_matches[i][1].distance;
if (dist1 < nn_match_ratio * dist2) {
matched1.push_back(kpts1[first.queryIdx]);
matched2.push_back(kpts2[first.trainIdx]);
}
}
for (unsigned i = 0; i < matched1.size(); i++) {
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
col = homography * col;
col /= col.at<double>(2);
double dist = sqrt(pow(col.at<double>(0) - matched2[i].pt.x, 2) +
pow(col.at<double>(1) - matched2[i].pt.y, 2));
if (dist < inlier_threshold) {
int new_i = static_cast<int>(inliers1.size());
inliers1.push_back(matched1[i]);
inliers2.push_back(matched2[i]);
good_matches.push_back(DMatch(new_i, new_i, 0));
}
}
Mat res;
drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
imwrite("latch_result.png", res);
double inlier_ratio = inliers1.size() * 1.0 / matched1.size();
cout << "LATCH Matching Results" << endl;
cout << "*******************************" << endl;
cout << "# Keypoints 1: \t" << kpts1.size() << endl;
cout << "# Keypoints 2: \t" << kpts2.size() << endl;
cout << "# Matches: \t" << matched1.size() << endl;
cout << "# Inliers: \t" << inliers1.size() << endl;
cout << "# Inliers Ratio: \t" << inlier_ratio << endl;
cout << endl;
imshow("result", res);
waitKey();
return 0;
}
#else
int main()
{
std::cerr << "OpenCV was built without xfeatures2d module" << std::endl;
return 0;
}
#endif

View File

@ -1,96 +0,0 @@
#include <iostream>
#include "opencv2/opencv_modules.hpp"
#ifdef HAVE_OPENCV_XFEATURES2D
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/cudafeatures2d.hpp"
#include "opencv2/xfeatures2d/cuda.hpp"
using namespace std;
using namespace cv;
using namespace cv::cuda;
static void help()
{
cout << "\nThis program demonstrates using SURF_CUDA features detector, descriptor extractor and BruteForceMatcher_CUDA" << endl;
cout << "\nUsage:\n\tsurf_keypoint_matcher --left <image1> --right <image2>" << endl;
}
int main(int argc, char* argv[])
{
if (argc != 5)
{
help();
return -1;
}
GpuMat img1, img2;
for (int i = 1; i < argc; ++i)
{
if (string(argv[i]) == "--left")
{
img1.upload(imread(argv[++i], IMREAD_GRAYSCALE));
CV_Assert(!img1.empty());
}
else if (string(argv[i]) == "--right")
{
img2.upload(imread(argv[++i], IMREAD_GRAYSCALE));
CV_Assert(!img2.empty());
}
else if (string(argv[i]) == "--help")
{
help();
return -1;
}
}
cv::cuda::printShortCudaDeviceInfo(cv::cuda::getDevice());
SURF_CUDA surf;
// detecting keypoints & computing descriptors
GpuMat keypoints1GPU, keypoints2GPU;
GpuMat descriptors1GPU, descriptors2GPU;
surf(img1, GpuMat(), keypoints1GPU, descriptors1GPU);
surf(img2, GpuMat(), keypoints2GPU, descriptors2GPU);
cout << "FOUND " << keypoints1GPU.cols << " keypoints on first image" << endl;
cout << "FOUND " << keypoints2GPU.cols << " keypoints on second image" << endl;
// matching descriptors
Ptr<cv::cuda::DescriptorMatcher> matcher = cv::cuda::DescriptorMatcher::createBFMatcher(surf.defaultNorm());
vector<DMatch> matches;
matcher->match(descriptors1GPU, descriptors2GPU, matches);
// downloading results
vector<KeyPoint> keypoints1, keypoints2;
vector<float> descriptors1, descriptors2;
surf.downloadKeypoints(keypoints1GPU, keypoints1);
surf.downloadKeypoints(keypoints2GPU, keypoints2);
surf.downloadDescriptors(descriptors1GPU, descriptors1);
surf.downloadDescriptors(descriptors2GPU, descriptors2);
// drawing the results
Mat img_matches;
drawMatches(Mat(img1), keypoints1, Mat(img2), keypoints2, matches, img_matches);
namedWindow("matches", 0);
imshow("matches", img_matches);
waitKey(0);
return 0;
}
#else
int main()
{
std::cerr << "OpenCV was built without xfeatures2d module" << std::endl;
return 0;
}
#endif

View File

@ -15,7 +15,7 @@ import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Scalar;
import org.opencv.features2d.AKAZE;
import org.opencv.xfeatures2d.AKAZE;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.Features2d;
import org.opencv.highgui.HighGui;

View File

@ -40,11 +40,13 @@ try:
except AttributeError:
print("SIFT not available")
try:
FEATURES_FIND_CHOICES['brisk'] = cv.BRISK_create
cv.xfeatures2d_BRISK.create() # check if the function can be called
FEATURES_FIND_CHOICES['brisk'] = cv.xfeatures2d_BRISK.create
except AttributeError:
print("BRISK not available")
try:
FEATURES_FIND_CHOICES['akaze'] = cv.AKAZE_create
cv.xfeatures2d_AKAZE.create() # check if the function can be called
FEATURES_FIND_CHOICES['akaze'] = cv.xfeatures2d_AKAZE.create
except AttributeError:
print("AKAZE not available")
@ -276,7 +278,6 @@ def get_compensator(args):
def main():
args = parser.parse_args()
img_names = args.img_names
print(img_names)
work_megapix = args.work_megapix
seam_megapix = args.seam_megapix
compose_megapix = args.compose_megapix

View File

@ -22,7 +22,7 @@ homography = fs.getFirstTopLevelNode().mat()
## [load]
## [AKAZE]
akaze = cv.AKAZE_create()
akaze = cv.xfeatures2d.AKAZE_create()
kpts1, desc1 = akaze.detectAndCompute(img1, None)
kpts2, desc2 = akaze.detectAndCompute(img2, None)
## [AKAZE]