From 70ce2bbb9e6107247fe4a7a7771dfa1d962af7dc Mon Sep 17 00:00:00 2001
From: dab0bby <9976654+dab0bby@users.noreply.github.com>
Date: Wed, 21 Aug 2019 14:52:31 +0200
Subject: [PATCH] fix typo and reference

---
 doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown b/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown
index ca7853d96a..d8ba8f856d 100644
--- a/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown
+++ b/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown
@@ -38,13 +38,13 @@ best matches. There is also **cv.drawMatchesKnn** which draws all the k best mat
 will draw two match-lines for each keypoint. So we have to pass a mask if we want to selectively
 draw it.

-Let's see one example for each of SIFT and ORB (Both use different distance measurements).
+Let's see one example for each of SIFT and ORB (both use different distance measurements).

 ### Brute-Force Matching with ORB Descriptors

 Here, we will see a simple example on how to match features between two images. In this case, I
 have a queryImage and a trainImage. We will try to find the queryImage in trainImage using feature
-matching. ( The images are /samples/c/box.png and /samples/c/box_in_scene.png)
+matching. (The images are /samples/data/box.png and /samples/data/box_in_scene.png)

 We are using ORB descriptors to match features. So let's start with loading images, finding
 descriptors etc.